
This queue is for tickets about the DBD-mysql CPAN distribution.

Report information
The Basics
Id: 102919
Status: resolved
Priority: 0
Queue: DBD-mysql

People
Owner: Nobody in particular
Requestors: ether [...] cpan.org
Cc: SREZIC [...] cpan.org
AdminCc:

Bug Information
Severity: Wishlist
Broken in: 4.031
Fixed in: (no value)



Subject: support for RowCacheSize?
$dbh->{RowCacheSize} always returns undef, even after I explicitly set it, which indicates that the attribute is not supported. Is it possible to add support?

When I call ->execute on a prepared SELECT, the entire result set is slurped into memory immediately, even if I want to iterate over the results one row at a time (e.g. via fetchrow_arrayref). For large result sets this consumes all available memory, which kills the process. The RowCacheSize attribute could make this better: https://metacpan.org/pod/DBI#RowCacheSize
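A minimal illustration of the behavior described above (table name and connection details are placeholders):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'secret',
                           { RaiseError => 1 });

    $dbh->{RowCacheSize} = 1000;
    print $dbh->{RowCacheSize} // 'undef', "\n";   # prints "undef"

    my $sth = $dbh->prepare('SELECT * FROM big_table');
    $sth->execute;   # the whole result set is buffered client-side here
    while (my $row = $sth->fetchrow_arrayref) {
        # fetching row by row no longer helps: memory is already consumed
    }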
Hi,

Thanks for your request. Did you use this feature in another DBD driver by any chance?

Would you require all options? These are the ones defined in DBI:

 0 - Automatically determine a reasonable cache size for each C<SELECT>
 1 - Disable the local row cache
>1 - Cache this many rows
<0 - Cache as many rows that will fit into this much memory for each C<SELECT>

-- Mike
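The four variants above, as they would look from application code if DBD::mysql supported the attribute (a sketch against the DBI spec, not working code today):

    $dbh->{RowCacheSize} = 0;        # driver picks a reasonable cache size
    $dbh->{RowCacheSize} = 1;        # disable the local row cache
    $dbh->{RowCacheSize} = 500;      # >1: cache 500 rows per SELECT
    $dbh->{RowCacheSize} = -2**20;   # <0: cache as many rows as fit in 1 MB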
On 2015-03-22 13:20:41, MICHIELB wrote:
> Hi,
>
> Thanks for your request. Did you use this feature in another DBD
> driver by any chance?
>
> Would you require all options? These are the ones defined in DBI:
>
>  0 - Automatically determine a reasonable cache size for each C<SELECT>
>  1 - Disable the local row cache
> >1 - Cache this many rows
> <0 - Cache as many rows that will fit into this much memory for each C<SELECT>
The variant that I would use is <0, e.g.

    $dbh->{RowCacheSize} = -128e6;  # buffer size = 128 MB

BTW, this went by on irc #dbi over the weekend, as a possible workaround to running out of memory during large queries:

06:59 <@timbunce> ether: re DBD::mysql memory usage on big results, see the mysql_use_result attribute.
06:59 < Sno> as said - timbunce knows almost everything about databases :)
07:00 <@timbunce> Being in this game a looong time helps. Well, until the rate I forget things exceeds the rate I learn them :)
07:02 < Sno> :)
07:04 <@timbunce> Oldest reference I can see is from 2003 :) http://perl.markmail.org/search/?q=mysql_use_result#query:mysql_use_result%20from%3A%22Tim%20Bunce%22%20order%3Adate-forward+page:1+mid:3xf7ix6bjrmqjnvs+state:results
07:08 <@timbunce> ether: just setting mysql_use_result will mean any locks are held longer while the results are streamed. If that's a problem you can try SQL_BUFFER_RESULT in the SQL http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sql_buffer_result
07:10 <@timbunce> (Claim to fame... I asked Monty, the mysql author, to add the SQL_BUFFER_RESULT feature for just this use-case.)
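For concreteness, the mysql_use_result workaround would look roughly like this (a sketch; table and connection details are placeholders):

    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'secret',
                           { RaiseError => 1 });

    # mysql_use_result streams rows from the server as they are fetched
    # instead of buffering the whole result set on the client.
    my $sth = $dbh->prepare('SELECT * FROM big_table',
                            { mysql_use_result => 1 });
    $sth->execute;
    while (my $row = $sth->fetchrow_arrayref) {
        # client memory stays flat, but server-side locks are held until
        # the last row is fetched; SQL_BUFFER_RESULT (above) can help there.
    }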
On 2015-03-23 13:47:48, ETHER wrote:
> The variant that I would use is <0, e.g.
>
>     $dbh->{RowCacheSize} = -128e6;  # buffer size = 128 MB
>
> BTW, this went by on irc #dbi over the weekend, as a possible
> workaround to running out of memory during large queries:
>
> 06:59 <@timbunce> ether: re DBD::mysql memory usage on big results,
> see the mysql_use_result attribute.
> [...]
mysql_use_result is a possible workaround, but you're just moving the memory problem from the client to the server, so you have to make sure that your mysql server has enough memory.

Another possible workaround is the MySQL HANDLER syntax <http://dev.mysql.com/doc/refman/5.0/en/handler.html>. This is a cursor-like solution without the memory consumption, but it only works if you just want to scan over a table (possibly with a simple filter) and you don't need a consistent snapshot of the table: changes are immediately reflected while you scan through it.
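A rough sketch of the HANDLER approach through DBI (hypothetical table name big_table; batch size arbitrary; error handling omitted):

    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'secret',
                           { RaiseError => 1 });

    $dbh->do('HANDLER big_table OPEN');
    my $rows = $dbh->selectall_arrayref(
        'HANDLER big_table READ FIRST LIMIT 1000');
    while ($rows && @$rows) {
        for my $row (@$rows) {
            # process a bounded batch; at most 1000 rows are ever held
            # in client memory at once
        }
        $rows = $dbh->selectall_arrayref(
            'HANDLER big_table READ NEXT LIMIT 1000');
    }
    $dbh->do('HANDLER big_table CLOSE');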