Subject: | Sys::Syslog 0.35 Memory Leak |
Date: | Tue, 23 Jul 2019 17:00:02 +0300 |
To: | bug-Sys-Syslog [...] rt.cpan.org |
From: | Alexander Shikov <a.shikov [...] dtel-ix.net> |
Hello!
Recently we migrated some Perl scripts to another server. Both servers run FreeBSD.
The old server runs 11.2-RELEASE with Perl 5.24 and Sys::Syslog 0.33_01.
The new server runs 12.0-RELEASE with Perl 5.28 and Sys::Syslog 0.35.
On the new server we noticed an intensive memory leak. After investigating we found
that the leak is caused by Sys::Syslog and can be reproduced with a simple script:
-----------------------------------------------------------------------------------------
#!/usr/local/bin/perl -w
use strict;
use Sys::Syslog qw(:standard :macros);
openlog('yarpd-syslog', 'ndelay', LOG_LOCAL2);
my $c = 0;
my @chars = ("A".."Z", "a".."z", "0".."9", "." , "-", "_" );
while ($c++ <= 10000) {
    my $string = '';
    $string .= $chars[rand @chars] for 1..32;
    for (1..512) {
        syslog(LOG_DEBUG, "DEBUG LOG %d/%d RANDOM STRING: %s", $c, $_, $string);
    }
    sleep 1;
}
-----------------------------------------------------------------------------------------
Add the line below to syslog.conf to save the log to a file:
local2.* /var/log/yarpd.log
Then start the script and watch the log file with tail -f.
Right after the script is started, 'top' shows that its memory usage starts to grow rapidly:
last pid: 44500; load averages: 1.12, 1.19, 1.22 up 21+05:23:33 15:21:28
105 processes: 2 running, 103 sleeping
CPU: 43.2% user, 0.0% nice, 7.2% system, 0.4% interrupt, 49.2% idle
Mem: 427M Active, 189M Inact, 6452K Laundry, 448M Wired, 200M Buf, 892M Free
Swap: 4096M Total, 222M Used, 3874M Free, 5% Inuse
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
45209 root 1 22 0 19M 8956K nanslp 0 0:00 1.11% perl
[...]
45209 root 1 22 0 196M 186M nanslp 1 0:04 4.50% perl
[...]
45209 root 1 22 0 266M 258M nanslp 0 0:05 4.64% perl
[...]
45209 root 1 22 0 541M 534M nanslp 0 0:11 2.29% perl
I tested on a physical machine and on a virtual machine; the situation is the same on both.
Only the number of messages per second needed to trigger it differs: 512 was enough on the
VDS, while on the physical machine I tested it with 9000.
I also tested the same script on the old server, and there everything is fine.
There the problem is absent at any message rate I tried: 512, 4000, 9000.
Possibly a bug was introduced between versions 0.33 and 0.35.
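In case it helps with triage, the growth can also be measured per batch of messages without
watching top. Below is a minimal probe sketch; the 'ps -o rss=' call, the 'leak-probe' ident
and the 1000-message batch size are only illustrative choices, not part of our setup:
-----------------------------------------------------------------------------------------
#!/usr/local/bin/perl -w
use strict;
use Sys::Syslog qw(:standard :macros);

# Resident set size of this process in KB (ps syntax works on FreeBSD and Linux).
sub rss_kb {
    my $r = `ps -o rss= -p $$`;
    $r =~ s/\s+//g;
    return $r;
}

openlog('leak-probe', 'ndelay', LOG_LOCAL2);
my $before = rss_kb();
syslog(LOG_DEBUG, "probe message %d", $_) for 1 .. 1000;
my $after = rss_kb();
printf "RSS grew by %d KB over 1000 syslog() calls\n", $after - $before;
closelog();
-----------------------------------------------------------------------------------------
On the affected server I would expect this number to grow with every batch, while on the
old setup it should stay flat after the first batch.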
Thanks.
--
Alexander Shikov
Technical Staff, Digital Telecom IX
Tel.: +380 44 201 14 07
Mob.: +380 50 410 30 57
http://dtel-ix.net/