
This queue is for tickets about the POE CPAN distribution.

Report information
The Basics
Id: 69485
Status: resolved
Priority: 0/
Queue: POE

People
Owner: Nobody in particular
Requestors: TEAM [...] cpan.org
Cc:
AdminCc:

Bug Information
Severity: (no value)
Broken in: 1.311
Fixed in: (no value)



Subject: t/30_loops/io_poll/z_leolo_wheel_run.t hangs on Dragonfly x64 BSD
As mentioned briefly in IRC, it seems that Dragonfly BSD has some difficulty with t/30_loops/io_poll/z_leolo_wheel_run.t; all other tests appear to pass cleanly. This is the x64 version of Dragonfly BSD. uname -a reports:

  DragonFly dragonfly-x64.rumah 2.10-RELEASE DragonFly v2.10.1.1.gf7ba0-RELEASE #3: Mon Apr 25 12:49:45 PDT 2011 root@pkgbox64.dragonflybsd.org:/usr/obj/usr/src/sys/X86_64_GENERIC x86_64

Fairly minimal install, just been using it for occasional CPAN module testing.

Running this test directly halts after the "ok 1 - Start" step:

  DB<2> c
  1..14
  ok 1 - Start

Breaking here reports that we're in loop_do_timeslice:

  ^CPOE::Kernel::loop_do_timeslice(lib/POE/Loop/IO_Poll.pm:288):
  288:      if (ASSERT_FILES) {
  DB<2> T
  . = POE::Kernel::loop_do_timeslice(ref(POE::Kernel)) called from file `lib/POE/Loop/IO_Poll.pm' line 384
  . = POE::Kernel::loop_run(ref(POE::Kernel)) called from file `lib/POE/Kernel.pm' line 1210
  . = POE::Kernel::run(ref(POE::Kernel)) called from file `/home/cpan/perl5/lib/perl5/POE/Test/Loops/z_leolo_wheel_run.pm' line 40
  $ = require 'z_leolo_wheel_run.pm' called from file `t/30_loops/io_poll/z_leolo_wheel_run.t' line 26

Turning on full autotrace gives these as the last lines before the process gets "stuck" - might be misleading since I haven't gone through the trace for fork()s yet:

  POE::Wheel::allocate_wheel_id(lib/POE/Wheel.pm:21):
  21:     while (1) {
  POE::Wheel::allocate_wheel_id(lib/POE/Wheel.pm:22):
  22:       last unless exists $active_wheel_ids{ ++$current_id };
  POE::Wheel::allocate_wheel_id(lib/POE/Wheel.pm:24):
  24:     return $active_wheel_ids{$current_id} = $current_id;
  POE::Wheel::Run::new(lib/POE/Wheel/Run.pm:561):
  561:      $poe_kernel->_data_sig_unmask_all if $must_unmask;
  POE::Kernel::_data_sig_unmask_all(lib/POE/Resource/Signals.pm:815):
  815:    return if RUNNING_IN_HELL;
  POE::Kernel::CODE(0x802a5c810)(lib/POE/Kernel.pm:90):
  90:         *{ __PACKAGE__ . '::RUNNING_IN_HELL' } = sub { 0 };
  POE::Kernel::_data_sig_unmask_all(lib/POE/Resource/Signals.pm:816):
  816:    my $self = $poe_kernel;
  POE::Kernel::_data_sig_unmask_all(lib/POE/Resource/Signals.pm:817):
  817:    unless( $signal_mask_none ) {
  POE::Kernel::_data_sig_unmask_all(lib/POE/Resource/Signals.pm:820):
  820:      my $mask_temp = POSIX::SigSet->new();
  POE::Kernel::_data_sig_unmask_all(lib/POE/Resource/Signals.pm:821):
  821:      sigprocmask( SIG_SETMASK, $signal_mask_none, $mask_temp )
  822:        or _trap "<sg> Unable to unmask all signals: $!";
  POE::Wheel::Run::new(lib/POE/Wheel/Run.pm:564):
  564:      $sem_pipe_write = undef;
  565:      {
  POE::Wheel::Run::new(lib/POE/Wheel/Run.pm:566):
  566:        local $/ = "\n"; # TODO - Needed?
  POE::Wheel::Run::new(lib/POE/Wheel/Run.pm:566):
  566:        local $/ = "\n"; # TODO - Needed?
  POE::Wheel::Run::new(lib/POE/Wheel/Run.pm:567):
  567:        my $chldout = <$sem_pipe_read>;

So it looks like maybe the readline on $sem_pipe_read is blocking. Setting a breakpoint there reports these two values seen in succession:

  $chldout = "go\n";
  $chldout = "go\n";

Truss output attached. I'll try reading through the test to pin down what's happening in more detail, since I'm not familiar with the POE::Wheel::Run source.

cheers,

Tom
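For context on the sigprocmask() call that appears in the autotrace just before the hang: POE unmasks all signals by setting the process mask to an empty POSIX::SigSet. The sketch below is illustrative only, not POE's actual implementation; the variable names ($signal_mask_none, $mask_temp) are borrowed from the trace.

```perl
#!/usr/bin/perl
# Illustrative sketch (not POE's code) of the unmask-all step seen in
# the trace at lib/POE/Resource/Signals.pm lines 815-822.
use strict;
use warnings;
use POSIX qw(sigprocmask SIG_BLOCK SIG_SETMASK SIGINT);

# Block SIGINT, as code might do around a fork() to avoid races.
my $block    = POSIX::SigSet->new(SIGINT);
my $old_mask = POSIX::SigSet->new();
sigprocmask(SIG_BLOCK, $block, $old_mask)
  or die "Unable to block signals: $!";

# Unmask everything again by installing an empty signal set.
my $signal_mask_none = POSIX::SigSet->new();   # empty set
my $mask_temp        = POSIX::SigSet->new();   # receives the old mask
sigprocmask(SIG_SETMASK, $signal_mask_none, $mask_temp)
  or die "Unable to unmask all signals: $!";

print "signals unmasked\n";
```

Per the trace, this step completes normally; the hang happens afterwards, at the readline.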
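The blocking readline on $sem_pipe_read is one half of a common fork-synchronization handshake: the parent blocks reading a "semaphore" pipe until the child writes a token (here "go\n") to confirm it has started. A minimal standalone sketch of that pattern (assumed from the trace, not POE's actual code):

```perl
#!/usr/bin/perl
# Minimal sketch of a semaphore-pipe handshake between parent and child.
# Names ($sem_pipe_read, $sem_pipe_write, $chldout) follow the trace.
use strict;
use warnings;

pipe(my $sem_pipe_read, my $sem_pipe_write) or die "pipe failed: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
  # Child: close the unused read end, signal readiness, and exit.
  # close() flushes the buffered "go\n" to the parent.
  close $sem_pipe_read;
  print {$sem_pipe_write} "go\n";
  close $sem_pipe_write;
  exit 0;
}

# Parent: close the unused write end.  If this were skipped and the
# child died before writing, the readline below would never see EOF
# and could block forever -- one way this pattern can hang.
close $sem_pipe_write;

local $/ = "\n";
my $chldout = <$sem_pipe_read>;   # blocks here until "go\n" (or EOF)
print "parent got: $chldout";
waitpid($pid, 0);
```

Note that the breakpoint above saw "go\n" arrive twice, so the token is being delivered; the question is why a later iteration of the readline never returns.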
Subject: leolo_wheel_run.log

Message body is not shown because it is too large.

The truss output isn't really working for me. I see it ends in a SIGINT, and it appears that poll() might be blocking shortly before that. This differs from the Perl debugger log that implicates a plain blocking readline. A consistent reproduction would help a lot. A temporary Dragonfly x64 shell might help me reproduce the problem at will and diagnose it.
This ticket is at an impasse. Downgrading to "stalled". It may be rejected in the future unless the requester (and/or other people) provides enough information to make headway on the problem.
Whatever it was seems to be fixed in POE 1.350. CPAN testers report three passing tests on Dragonfly and no failures: http://www.cpantesters.org/distro/P/POE?grade=1&perlmat=1&patches=2&oncpan=2&distmat=2&perlver=ALL&osname=dragonfly&version=1.350 Thanks to Chris Williams for pointing this out so I could close the ticket. :)