
This queue is for tickets about the REST-Neo4p CPAN distribution.

Report information
The Basics
Id: 81548
Status: resolved
Priority: 0
Queue: REST-Neo4p

People
Owner: maj.fortinbras [...] gmail.com
Requestors: joseph.guhlin [...] gmail.com
Cc:
AdminCc:

Bug Information
Severity: Important
Broken in: 0.2011
Fixed in: 0.2254



Subject: Stops talking to the server after 1,019 queries.
After 1,019 Cypher queries it errors out with "Can't connect to 127.0.0.1:7474 (timeout)(after 3 retries)". I've increased the retry and timeout values, and I'm hitting the problem even on a toy example (also attached as a file). Comment out either loop and the remaining one still fails on query 1,020. I think this is related to an earlier problem I've been having.

use REST::Neo4p;

REST::Neo4p->connect( "http://127.0.0.1:7474" );

my $statement = 'START x=node(0) RETURN x';

foreach my $i (0..5000) {
  my $node = REST::Neo4p->get_node_by_id(0);
}

foreach my $i (0..5000) {
  warn "On $i";
  my $query = REST::Neo4p::Query->new($statement);
  $query->execute();
}
Subject: test.pl
use REST::Neo4p;

REST::Neo4p->connect( "http://127.0.0.1:7474" );

my $statement = 'START x=node(0) RETURN x';

foreach my $i (0..5000) {
  my $node = REST::Neo4p->get_node_by_id(0);
}

foreach my $i (0..5000) {
  warn "On $i";
  my $query = REST::Neo4p::Query->new($statement);
  $query->execute();
}
From: joseph.guhlin [...] gmail.com
Also, I don't see this problem with index queries at all; I routinely run over 60k of them in a single script. Not certain why that part is unaffected.
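(For context, an index lookup with REST::Neo4p looks roughly like the sketch below; the index name 'my_nodes' and the key/value pair are hypothetical, made up for illustration.)

use REST::Neo4p;
use REST::Neo4p::Index;

REST::Neo4p->connect( "http://127.0.0.1:7474" );

# Hypothetical node index and key/value pair, for illustration only.
my $idx = REST::Neo4p::Index->new('node', 'my_nodes');

foreach my $i (0..60_000) {
  my @hits = $idx->find_entries( name => 'fred' );
}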
From: joseph.guhlin [...] gmail.com
One more note: re-using the query object allows it to run to completion, so I think something is being left open somewhere (maybe LWP connections not shutting down when the object goes away?), or a file limit is being hit (although I did check ulimit). This works:

use REST::Neo4p;

REST::Neo4p->connect( "http://127.0.0.1:7474" );

my $statement = 'START x=node(0) RETURN x';
my $query = REST::Neo4p::Query->new($statement);

foreach my $i (0..5000) {
  warn "On $i";
  $query->execute();
}
Subject: Found the limit problem and a workaround
From: joseph.guhlin [...] gmail.com
The 1,019 limit is due to Mac OS X's limit on the maximum number of open files. However, the real problem is that the query object is never garbage collected, due to a circular reference created by the _iterator function. The workaround I am now using is to call

delete($query->{_iterator});

when I am finished with the query object, and this seems to be working now. Will post if I run into any trouble with this method.

Best,
--Joseph
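(A minimal sketch of that workaround applied to the failing loop from the original report:)

use REST::Neo4p;

REST::Neo4p->connect( "http://127.0.0.1:7474" );

my $statement = 'START x=node(0) RETURN x';

foreach my $i (0..5000) {
  my $query = REST::Neo4p::Query->new($statement);
  $query->execute();
  # Break the circular reference by hand so the query object (and any
  # connection/filehandle it holds) can be garbage collected.
  delete $query->{_iterator};
}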
Hi Joseph,

I think this is finally fixed with v0.2253. A query should now delete its own {_iterator} when it finishes or is gc'd. Please give it a try if you are interested. Thanks,

MAJ
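(For anyone curious how this class of leak arises: Perl uses reference counting, so if an object stores a closure that captures the object itself, the count never reaches zero and DESTROY never runs. Below is a generic sketch of the cycle and the usual fix with Scalar::Util::weaken; it is illustrative only, not REST::Neo4p's actual code.)

package Query;
use strict;
use warnings;
use Scalar::Util qw(weaken);

sub new { return bless {}, shift }

# Leaky version: $self->{_iterator} holds a closure that captures $self,
# so the object's reference count can never reach zero and DESTROY never
# runs -- any connection or filehandle the object owns stays open.
sub make_iterator_leaky {
  my ($self) = @_;
  $self->{_iterator} = sub { return $self->next_row };
}

# Fixed version: the closure captures a weakened copy of $self instead,
# so it no longer keeps the object alive.
sub make_iterator {
  my ($self) = @_;
  my $weak = $self;
  weaken($weak);
  $self->{_iterator} = sub { return unless $weak; return $weak->next_row };
}

sub next_row { return }  # stand-in for the real row-fetching logic

1;

With the weakened closure, letting the query object go out of scope frees it immediately, which is effectively what deleting {_iterator} accomplishes explicitly.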