
This queue is for tickets about the POE-Component-Client-HTTP CPAN distribution.

Report information
The Basics
Id: 26185
Status: rejected
Priority: 0
Queue: POE-Component-Client-HTTP

People
Owner: Nobody in particular
Requestors: zloysystem [...] gmail.com
Cc:
AdminCc:

Bug Information
Severity: Critical
Broken in: 0.82
Fixed in: (no value)



Subject: Memory leak
There is a memory leak with this script. The script requests a page with at most 1000 parallel streams and repeats the operation. It was tested against a web server (the requested page loads) and against a closed port (the operation ends in a timeout). Ten megabytes of RAM leaked over the course of one hour.

  use warnings;
  use strict;

  use HTTP::Request::Common qw(GET POST);
  use POE;
  use POE::Component::Client::HTTP;
  use POE::Component::Client::Keepalive;

  $|++;

  #~~ constants
  my $threads_count = 1000;                      # threads count
  my $timeout       = 2;                         # timeout for connect
  my $req           = GET "http://127.0.0.1/";   # request :)
  my $complete      = 0;

  my $pool = POE::Component::Client::Keepalive->new(
    keep_alive   => 0,
    max_open     => 10000,
    max_per_host => 10000,
    timeout      => $timeout,
  );

  POE::Component::Client::HTTP->spawn(
    Alias             => 'ua',
    Timeout           => $timeout,
    ConnectionManager => $pool,
    NoProxy           => '',
    FollowRedirects   => 5,
  );

  POE::Session->create(
    package_states => [ main => [ "_start", "got_response", "_stop" ] ],
  );

  sub got_response {
    $complete++;
    print "Complete: $complete\n";
    my $id = $_[ARG0]->[1];    # get request id (ARG0 is the request packet [ $http_request, $tag ])
    $poe_kernel->post( "ua" => "request", "got_response", $req, $id );    # add request
  }

  sub _start {
    for ( 0 .. $threads_count - 1 ) {
      $poe_kernel->post( "ua" => "request", "got_response", $req, $_ );   # add request
    }
    print "Started $threads_count threads\n";
  }

  sub _stop {
    print "Stopped at " . scalar(localtime()) . "\n";
  }

  $poe_kernel->run();
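For reference, POE::Component::Client::HTTP hands the response event two array references: ARG0 is the request packet [ $http_request, $tag ] and ARG1 is the response packet [ $http_response ]. A drop-in variant of the got_response handler above (reusing $req, $complete, and $poe_kernel from the script) could log the HTTP status as well, which helps tell the "page loaded" case from the timeout case; the status logging is an illustrative addition, not part of the submitted script:

  sub got_response {
    my ( $request_packet, $response_packet ) = @_[ ARG0, ARG1 ];
    my $id       = $request_packet->[1];    # tag passed when the request was posted
    my $response = $response_packet->[0];   # HTTP::Response object
    $complete++;
    printf "Complete: %d (id %s, status %s)\n", $complete, $id, $response->code;
    $poe_kernel->post( "ua" => "request", "got_response", $req, $id );
  }

A timed-out request typically comes back as an error response (such as 408) rather than being dropped, so the counter still advances in both test scenarios.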
I can't track down the problem in the test as it's written. Attempts to dump data to a file for diagnostics result in:

  Too many open files at rt-26185.pl line 40.

This seems to indicate that the test case doesn't function normally. I scaled back the test to 10 (rather than 1000) "threads" so that some filehandles would be left over for the dumps. The dumps don't show an upward memory usage trend anymore:

  -rw------- 1 troc staff 321231 Feb 13 03:35 trace-100.dump
  -rw------- 1 troc staff 294738 Feb 13 03:35 trace-110.dump
  -rw------- 1 troc staff 316905 Feb 13 03:35 trace-120.dump
  -rw------- 1 troc staff 289401 Feb 13 03:36 trace-130.dump
  -rw------- 1 troc staff 301119 Feb 13 03:36 trace-140.dump
  -rw------- 1 troc staff 314662 Feb 13 03:36 trace-150.dump
  -rw------- 1 troc staff 258967 Feb 13 03:36 trace-160.dump
  -rw------- 1 troc staff 327582 Feb 13 03:36 trace-170.dump
  -rw------- 1 troc staff 293736 Feb 13 03:37 trace-180.dump
  -rw------- 1 troc staff 304026 Feb 13 03:37 trace-190.dump
  -rw------- 1 troc staff 311382 Feb 13 03:37 trace-200.dump
  -rw------- 1 troc staff 279150 Feb 13 03:37 trace-210.dump
  -rw------- 1 troc staff 308247 Feb 13 03:37 trace-220.dump
  -rw------- 1 troc staff 314234 Feb 13 03:38 trace-230.dump
  -rw------- 1 troc staff 309404 Feb 13 03:38 trace-240.dump
  -rw------- 1 troc staff 289914 Feb 13 03:38 trace-250.dump
  -rw------- 1 troc staff 302305 Feb 13 03:38 trace-260.dump
  -rw------- 1 troc staff 266486 Feb 13 03:39 trace-270.dump
  -rw------- 1 troc staff 289353 Feb 13 03:39 trace-280.dump
  -rw------- 1 troc staff 276112 Feb 13 03:39 trace-290.dump
  -rw------- 1 troc staff 319165 Feb 13 03:39 trace-300.dump
  -rw------- 1 troc staff 271072 Feb 13 03:39 trace-310.dump
  -rw------- 1 troc staff 289724 Feb 13 03:40 trace-320.dump
  -rw------- 1 troc staff 262076 Feb 13 03:40 trace-330.dump
  -rw------- 1 troc staff 309495 Feb 13 03:40 trace-340.dump
  -rw------- 1 troc staff 321917 Feb 13 03:40 trace-350.dump
  -rw------- 1 troc staff 300423 Feb 13 03:41 trace-360.dump
  -rw------- 1 troc staff 321917 Feb 13 03:41 trace-370.dump
  -rw------- 1 troc staff 262260 Feb 13 03:41 trace-380.dump
  -rw------- 1 troc staff 327304 Feb 13 03:41 trace-390.dump
  -rw------- 1 troc staff 286720 Feb 13 03:41 trace-400.dump
  -rw------- 1 troc staff 327304 Feb 13 03:42 trace-410.dump
  -rw------- 1 troc staff 284733 Feb 13 03:42 trace-420.dump
  -rw------- 1 troc staff 325372 Feb 13 03:42 trace-430.dump
  -rw------- 1 troc staff 258416 Feb 13 03:42 trace-440.dump
  -rw------- 1 troc staff 324915 Feb 13 03:43 trace-450.dump
  -rw------- 1 troc staff 285710 Feb 13 03:43 trace-460.dump
  -rw------- 1 troc staff 317140 Feb 13 03:43 trace-470.dump
  -rw------- 1 troc staff 296979 Feb 13 03:43 trace-480.dump
  -rw------- 1 troc staff 308454 Feb 13 03:43 trace-490.dump
  -rw------- 1 troc staff 315217 Feb 13 03:44 trace-500.dump
  -rw------- 1 troc staff 305027 Feb 13 03:44 trace-510.dump
  -rw------- 1 troc staff 282298 Feb 13 03:44 trace-520.dump
  -rw------- 1 troc staff 270819 Feb 13 03:44 trace-530.dump
  -rw------- 1 troc staff 289033 Feb 13 03:45 trace-540.dump
  -rw------- 1 troc staff 333336 Feb 13 03:45 trace-550.dump
  -rw------- 1 troc staff 301746 Feb 13 03:45 trace-560.dump
  -rw------- 1 troc staff 333337 Feb 13 03:45 trace-570.dump
  -rw------- 1 troc staff 343959 Feb 13 03:45 trace-580.dump
  -rw------- 1 troc staff 312551 Feb 13 03:46 trace-590.dump
  -rw------- 1 troc staff 339615 Feb 13 03:46 trace-600.dump
  -rw------- 1 troc staff 275942 Feb 13 03:46 trace-610.dump
  -rw------- 1 troc staff 316775 Feb 13 03:46 trace-620.dump
  -rw------- 1 troc staff 286471 Feb 13 03:47 trace-630.dump
  -rw------- 1 troc staff 305641 Feb 13 03:47 trace-640.dump
  -rw------- 1 troc staff 294339 Feb 13 03:47 trace-650.dump
  -rw------- 1 troc staff 295775 Feb 13 03:47 trace-660.dump
  -rw------- 1 troc staff 332725 Feb 13 03:48 trace-670.dump
  -rw------- 1 troc staff 290304 Feb 13 03:48 trace-680.dump
  -rw------- 1 troc staff 307180 Feb 13 03:48 trace-690.dump
  -rw------- 1 troc staff 299389 Feb 13 03:48 trace-700.dump
  -rw------- 1 troc staff 294974 Feb 13 03:48 trace-710.dump
  -rw------- 1 troc staff 337075 Feb 13 03:49 trace-720.dump

I'm sure there's a leak in the code, but it seems to be limited to situations where the code wouldn't perform correctly anyway. Can you provide a functioning test case that illustrates the leak?
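The instrumented version of the script that produced those trace files is not attached to the ticket. As a rough sketch of that kind of periodic tracing (an assumption about the approach, using Data::Dumper snapshots of $poe_kernel; the actual dump format is unknown), a drop-in variant of got_response could write a numbered dump every tenth completed request:

  use Data::Dumper;    # hypothetical tracing aid, not part of the original test

  sub got_response {
    $complete++;
    if ( $complete % 10 == 0 ) {
      # With 1000 sockets already open, opening one more file here can exceed
      # the descriptor limit ("Too many open files"); with only 10 requests
      # in flight there are descriptors to spare for the dump.
      open my $trace, ">", "trace-$complete.dump"
        or die "can't write trace-$complete.dump: $!";
      print $trace Dumper($poe_kernel);    # snapshot whose size can be compared over time
      close $trace;
    }
    my $id = $_[ARG0]->[1];
    $poe_kernel->post( "ua" => "request", "got_response", $req, $id );
  }

Comparing successive dump sizes, as in the listing above, is what shows whether the kernel's internal state keeps growing.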
Can't reproduce the leak.