[SCore-users-jp] Re: [SCore-users] ULT:PANIC Try to break thread

Jure Jerman jure.jerman @ rzs-hm.si
Mon Dec 20 16:37:04 JST 2004


Hello,

thank you very much for your reply and for the suggested way forward. Here
is the output of the scored_dev command:

0> Attaching GDB: ULT PANIC
Using host libthread_db library "/lib/libthread_db.so.1".
0x44ec00f7 in wait () from /lib/i686/libc.so.6
#0  0x44ec00f7 in wait () from /lib/i686/libc.so.6
#1  0x080d791a in score_attach_debugger (
     message=0x4544 <Address 0x4544 out of bounds>, exno=15) at ../message.c:289
#2  0x080d586c in ult_panic (
     format=0x81297e0 "Try to break thread in an atomic region.")
     at ../trace.c:141
#3  0x0804fd95 in mpcSyncRead (valp=0x4054cd04, node=0, argfirst=
        {laddr = 0x81e9acc, naddr = 136223436, b32s = {d1 = 136223436, d2 = 0},
         b8s = {d1 = 204 '\314', d2 = 154 '\232', d3 = 30 '\036', d4 = 8 '\b',
                d5 = 0 '\0', d6 = 0 '\0', d7 = 0 '\0', d8 = 0 '\0'}})
     at mpcxx_sync_inlines.h:199
#4  0x08051878 in Sync<int>::read(int&) (this=0x0, ap=@0x4054cd04)
     at mpcxx_mttl.h:250
#5  0x0808d38d in flush_pe_buffer(PE*) (pe=0x81ee8b0) at ../fepio.cc:434
#6  0x08067571 in flush_pe (pe=0x81ee8b0) at ../pe.cc:513
#7  0x080675ff in free_pe(PE*) (pe=0x81ee8b0) at ../pe.cc:528
#8  0x08067c65 in free_ppe (ppe=0x81ed820, flag_dontclear=0)
     at ../pegroup.cc:195
#9  0x08067e3f in createPPE (peg=0x81ec410, jobstep=1, proc_no=0,
     proc_id=-512, loc=-1) at ../pegroup.cc:256
#10 0x08068bcc in fork_pegroup(PeGroup*, int, int, int) (peg=0x81ec410,
     jobstep=1, nppe=1, loc=-1) at ../pegroup.cc:678
#11 0x0809714c in fork_all(GlobalPtr<ControlTree>, int, int, int) (node_gp=
        {gval = {gp = {pe = 0, addr = {laddr = 0x81df788, naddr = 136181640,
            b32s = {d1 = 136181640, d2 = 0},
            b8s = {d1 = 136 '\210', d2 = 247 '\367', d3 = 29 '\035', d4 = 8 '\b',
                   d5 = 0 '\0', d6 = 0 '\0', d7 = 0 '\0', d8 = 0 '\0'}},
            size = 1564}}},
     jobstep=1, nppe=1, loc=-1) at ../control.cc:856
#12 0x08082c85 in fork_subjob(GlobalPtr<Subjob>, int, int) (subjob_gp=
           {gval = {gp = {pe = 0, addr = {laddr = 0x81e87c8, naddr = 136218568, b32s = {d1 = 136218568, d2 = 0}, b8s = 
{d1 = 200 '�, d2 = 135 '\207', d3 = 30 '\036', d4 = 8 '\b', d5 = 0 '\0', d6 = 0 '\0', d7 = 0 '\0', d8 =0 '\0'}}, size = 
6120}}}, jobstep=-512, nppe=-512) at ../subjob.cc:830
#13 0x0808c301 in _ainvoker3<int, GlobalPtr<Subjob>, int, int>::invoke() ()
     at mpcxx_mttl.h:770
#14 0x08080a9c in syscall_done_fep(GlobalPtr<FEP>) () at ../subjob.cc:195
/opt/score5.8.2/deploy/score.gdb:1: Error in sourced command file:
Previous frame inner to this frame (corrupt stack?)

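Reading the trace: the panic in frame #2 is raised from mpcSyncRead (frame #3),
i.e. a blocking read on a sync variable was attempted while the user-level
thread (ULT) scheduler was inside an atomic, non-preemptible region, reached
through free_pe() flushing a PE buffer during createPPE(). Below is a minimal
C++ sketch of the kind of guard that produces such a panic; the names
atomic_depth, ult_atomic_begin and ult_block are illustrative assumptions,
not the actual SCore-D internals:

#include <cstdio>
#include <cstdlib>

/* Nesting depth of atomic (non-preemptible) regions for the current
   user-level thread; blocking is only legal when the depth is zero.
   (Hypothetical name -- not the real SCore-D variable.) */
static int atomic_depth = 0;

static void ult_atomic_begin() { ++atomic_depth; }
static void ult_atomic_end()   { --atomic_depth; }

/* Called by any primitive that may suspend the current thread, such as
   a blocking read of an unset sync variable.  Suspending inside an
   atomic region would corrupt scheduler state, so the runtime panics
   instead, producing the message seen in the log above. */
static void ult_block()
{
    if (atomic_depth > 0) {
        std::fprintf(stderr,
                     "ULT:PANIC Try to break thread in an atomic region.\n");
        std::abort();
    }
    /* ...otherwise, suspend and switch to another user-level thread... */
}

int main()
{
    ult_atomic_begin();  /* enter a non-preemptible region */
    ult_block();         /* a blocking call here trips the panic */
    ult_atomic_end();    /* never reached */
}

If that reading is correct, the underlying problem would be that the
PE-recycling path (createPPE -> free_ppe -> free_pe -> flush_pe_buffer)
performs a potentially blocking Sync<int>::read from a context where
preemption is disabled.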

Thank you,

Jure Jerman

Atsushi HORI wrote:
> Hi,
> 
> On 2004/12/15, at 16:30, Jure Jerman wrote:
> 
>> Hello,
>>
>> we upgraded from SCore 5.4 to 5.8.2 yesterday on our 14-node
>> dual Xeon cluster, and
>> scored reset a few times during the night with an error:
>>
>> <6> ULT:PANIC Try to break thread in an atomic region
>>
>> Any clue about the reason for the problem?
> 
> 
> This message means that some internal error happened inside SCore-D.
> 
> Could you please run "scored_dev" instead of "scored"? The scored_dev 
> program is the developer version of scored, and it will print more 
> useful debug messages when an error happens. Please send all of the 
> error messages to me.
> 
> _______________________________________________
> SCore-users mailing list
> SCore-users @ pccluster.org
> http://www.pccluster.org/mailman/listinfo/score-users
> 
> 

_______________________________________________
SCore-users mailing list
SCore-users @ pccluster.org
http://www.pccluster.org/mailman/listinfo/score-users


