
This queue is for tickets about the DBD-SQLite CPAN distribution.

Report information
The Basics
Id: 8564
Status: resolved
Priority: 0
Queue: DBD-SQLite

People
Owner: Nobody in particular
Requestors: Scott.Jackson [...] pnl.gov
Cc:
AdminCc:

Bug Information
Severity: Important
Broken in: 1.07
Fixed in: (no value)



Subject: Seg fault when accessing column name metadata $sth->{NAME}
I am getting a segmentation fault when trying to access statement column name metadata after an SQLite query via DBD::SQLite. I am using DBD::SQLite 1.07 (which uses SQLite v3.0.8) and Perl 5.8.0 built for i686-linux and i386-linux-thread-multi on Red Hat Linux 7.1 and 9.0. I am trying to use DBD::SQLite as part of an open source project that will provide job accounting data to some of the nation's largest supercomputing sites.

When I use the Perl DBI to access the column names ($sth->{NAME}) after an SQLite query, a seg fault occurs after some number of accesses to the column names. For example, if I query a table with 12 columns and ask for the column names, it may segfault after successfully giving me the first five. If I put a warn statement in a random place in the dbdimp.c code, the number of columns successfully returned may change to 10 or 0, or the script may even work for a number of queries only to die in a later query's metadata lookup.

More specifically, here is the backtrace from a recent segfault on a dual-processor i386 system. Trying to read metadata (column names) via Perl DBI (DBD::SQLite) results in a segmentation fault in sqlite3ValueText. The gdb backtrace shows a corrupt pVal:

0x40447c35 in sqlite3ValueText (pVal=0x1d9, enc=1 '\001') at vdbemem.c:672
672     warn("flags: %d", pVal->flags);
(gdb) where
#0  0x40447c35 in sqlite3ValueText (pVal=0x1d9, enc=1 '\001') at vdbemem.c:672
#1  0x40442d20 in sqlite3_value_text (pVal=0x1d9) at vdbeapi.c:47
#2  0x404437ce in columnName (pStmt=0x8cf8388, N=7, xFunc=0x40442d00 <sqlite3_value_text>, useType=0) at vdbeapi.c:375
#3  0x40443803 in sqlite3_column_name (pStmt=0x8cf8388, N=7) at vdbeapi.c:384
#4  0x404181d2 in sqlite_st_FETCH_attrib (sth=0x846d740, imp_sth=0x8cde4b8, keysv=0x8ceadf4) at dbdimp.c:629
#5  0x40406afd in XS_DBD__SQLite__st_FETCH_attrib (cv=0x8cd714c) at SQLite.xsi:690
#6  0x403905c0 in XS_DBI_dispatch () from /usr/local/gold/lib/perl5/i686-linux/auto/DBI/DBI.so
#7  0x080a3c13 in Perl_pp_entersub ()
#8  0x0809de32 in Perl_runops_standard ()
#9  0x0805fc90 in S_call_body ()
#10 0x0805f85f in Perl_call_sv ()
#11 0x080969be in S_magic_methpack ()
#12 0x08096ab8 in Perl_magic_getpack ()
#13 0x08094fe4 in Perl_mg_get ()
#14 0x080a87b0 in Perl_sv_setsv_flags ()
#15 0x080ac31b in Perl_sv_mortalcopy ()
#16 0x080a1acb in Perl_pp_helem ()
#17 0x0809de32 in Perl_runops_standard ()
#18 0x0805f3fb in S_run_body ()
#19 0x0805f18a in perl_run ()
#20 0x0805cbc3 in main ()
#21 0x40081306 in __libc_start_main (main=0x805cb40 <main>, argc=4, ubp_av=0xbffffb14, init=0x805bce8 <_init>, fini=0x80f5a70 <_fini>, rtld_fini=0x4000d2dc <_dl_fini>, stack_end=0xbffffb0c) at ../sysdeps/generic/libc-start.c:129

Running valgrind on the same code (with the same version of SQLite, incidentally on a different system) reveals that XS_DBI_dispatch is referencing a block of data that has already been freed.
==3209== Invalid read of size 4
==3209==    at 0x473DC5B2: sqlite3_column_count (vdbeapi.c:289)
==3209==    by 0x473DC88C: columnName (vdbeapi.c:366)
==3209==    by 0x473DC909: sqlite3_column_name (vdbeapi.c:383)
==3209==    by 0x473B26C8: sqlite_st_FETCH_attrib (dbdimp.c:628)
==3209==    by 0x473A129A: XS_DBD__SQLite__st_FETCH_attrib (SQLite.xsi:690)
==3209==    by 0x440FAEB0: XS_DBI_dispatch (DBI.xs:2663)
==3209==    by 0x402B04E4: Perl_pp_entersub (pp_hot.c:2781)
==3209==    by 0x40293C99: Perl_runops_debug (dump.c:1414)
==3209==    by 0x40248758: S_call_body (perl.c:2069)
==3209==    by 0x40248616: Perl_call_sv (perl.c:1948)
==3209==    by 0x40247DE5: Perl_call_method (perl.c:1881)
==3209==    by 0x4029C5BB: S_magic_methcall (mg.c:1328)
==3209==    by 0x4029C734: S_magic_methpack (mg.c:1340)
==3209==    by 0x4029C929: Perl_magic_getpack (mg.c:1353)
==3209==    by 0x4029A336: Perl_mg_get (mg.c:128)
==3209==  Address 0x45C58864 is 568 bytes inside a block of size 592 free'd
==3209==    at 0x400296BF: free (vg_replace_malloc.c:220)
==3209==    by 0x473D4030: sqlite3FreeX (util.c:283)
==3209==    by 0x473DF158: sqlite3VdbeDelete (vdbeaux.c:1411)
==3209==    by 0x473DEF2E: sqlite3VdbeFinalize (vdbeaux.c:1349)
==3209==    by 0x473BF696: sqlite3_finalize (main.c:1223)
==3209==    by 0x473B202E: sqlite_st_finish (dbdimp.c:523)
==3209==    by 0x473B1B18: sqlite_st_fetch (dbdimp.c:457)
==3209==    by 0x4739B78F: dbdxst_fetchall_arrayref (Driver_xst.h:97)
==3209==    by 0x473A0255: XS_DBD__SQLite__st_fetchall_arrayref (SQLite.xsi:618)
==3209==    by 0x440FAEB0: XS_DBI_dispatch (DBI.xs:2663)
==3209==    by 0x402B04E4: Perl_pp_entersub (pp_hot.c:2781)
==3209==    by 0x40293C99: Perl_runops_debug (dump.c:1414)
==3209==    by 0x402479AA: S_run_body (perl.c:1705)
==3209==    by 0x40247634: perl_run (perl.c:1624)
==3209==    by 0x80493A2: main (in /usr/bin/perl)
==3209==

Valgrind revealed numerous references to memory that had already been freed. Thank you for any help you can give me on this!

Scott Jackson
[guest - Mon Nov 22 13:36:07 2004]:
> gdb backtrace shows corrupt pVal:
>
> 0x40447c35 in sqlite3ValueText (pVal=0x1d9, enc=1 '\001') at vdbemem.c:672
> 672     warn("flags: %d", pVal->flags);
> (gdb) where
> #0  0x40447c35 in sqlite3ValueText (pVal=0x1d9, enc=1 '\001') at vdbemem.c:672
> #1  0x40442d20 in sqlite3_value_text (pVal=0x1d9) at vdbeapi.c:47
I apologize -- I left in some warn statements used while debugging the error, which might cause you some confusion. The first few lines in the gdb backtrace should actually say:

0x40447bf3 in sqlite3ValueText (pVal=0x259, enc=1 '\001') at vdbemem.c:669
674     if( pVal->flags&MEM_Null ){
(gdb) where
#0  0x40447bf3 in sqlite3ValueText (pVal=0x259, enc=1 '\001') at vdbemem.c:669
#1  0x40442d20 in sqlite3_value_text (pVal=0x259) at vdbeapi.c:47

Scott
From: david_dick [...] iprimus.com.au
Scott, I've been attempting, and failing, to replicate your error with the attached Perl snippet. Could you confirm that this causes the segfault on your system, or provide a fragment that does cause the segfault? Uru -Dave
#! /usr/bin/perl
use DBI();
use warnings;
use strict;

eval {
    my $dbh = DBI->connect("dbi:SQLite:dbname=temp.db", '', '',
        { RaiseError => 1, PrintError => 1, AutoCommit => 0 });
    $dbh->do('CREATE TABLE test (id INTEGER, b CHAR, c VARCHAR, d INTEGER, e INTEGER, f INTEGER, g INTEGER, h INTEGER, i CHAR, j CHAR, k CHAR, l CHAR)');
    # $dbh->do('INSERT INTO test (id, b, c, d, e, f, g, h, i, j, k, l) VALUES (1,2,3,4,5,6,7,8,9,10,11,12)');
    my ($sth) = $dbh->prepare('SELECT * FROM test');
    $sth->execute();
    my ($key);
    foreach $key (@{$sth->{NAME}}) {
        print "Key:$key\n";
    }
    for (my $i = 0; $i < (scalar @{$sth->{NAME}}); $i++) {
        print "Key:" . $sth->{NAME}->[$i] . "\n";
    }
    $sth->finish();
    $dbh->rollback();
    $dbh->disconnect();
};
if ($@) {
    chomp($@);
    print "$@\n";
    exit(1);
}
print "Success\n";
From: scottmo
Dave,

I very much appreciate the effort you made to try to reproduce this problem! I have been able to reproduce the segfault by making the following modifications to your test case.

Remove the parens from around your $sth variable declaration:

my $sth = $dbh->prepare('SELECT * FROM test');

After your $sth->execute(); line, add the lines:

my $data_array_ref = $sth->fetchall_arrayref();
my $cols = $sth->{NUM_OF_FIELDS};

Running the test case after doing this gave me my segfault. This is a very finicky segfault; as soon as I added one other print statement, the segfault went away (it would have shown up elsewhere if I continued to make calls in this manner). When I saw that you apparently deliberately parenthesized the $sth, I figured maybe I was using DBI wrong, but the perldocs for my version of DBI do not show the $sth being parenthesized in the manner that you did in your script. However, this appears to be an important component of the segfault.

$ perl segFaultTest.pl
Key:id
Key:b
Key:c
Key:d
Key:e
Key:f
Key:g
Key:h
Key:i
Key:j
Key:k
Key:l
Key:id
Segmentation fault (core dumped)
$

I have attached the test case that produces a segfault.

Thanks for your help,
Scott Jackson
#! /usr/bin/perl
use lib qw(/usr/local/gold/lib/perl5);
use DBI();
use warnings;
use strict;

eval {
    my $dbh = DBI->connect("dbi:SQLite:dbname=temp.db", '', '',
        { RaiseError => 1, PrintError => 1, AutoCommit => 0 });
    $dbh->do('CREATE TABLE test (id INTEGER, b CHAR, c VARCHAR, d INTEGER, e INTEGER, f INTEGER, g INTEGER, h INTEGER, i CHAR, j CHAR, k CHAR, l CHAR)');
    $dbh->do('INSERT INTO test (id, b, c, d, e, f, g, h, i, j, k, l) VALUES (1,2,3,4,5,6,7,8,9,10,11,12)');
    my $sth = $dbh->prepare('SELECT * FROM test');
    $sth->execute();
    my $data_array_ref = $sth->fetchall_arrayref();
    my $cols = $sth->{NUM_OF_FIELDS};
    my ($key);
    foreach $key (@{$sth->{NAME}}) {
        print "Key:$key\n";
    }
    for (my $i = 0; $i < (scalar @{$sth->{NAME}}); $i++) {
        print "Key:" . $sth->{NAME}->[$i] . "\n";
    }
    $sth->finish();
    $dbh->rollback();
    $dbh->disconnect();
};
if ($@) {
    chomp($@);
    print "$@\n";
    exit(1);
}
print "Success\n";
G'day Scott,

I've tried numerous times to replicate this bug with various versions of Perl and Linux. However, I cannot get hold of Red Hat 7.1, so possibly that is the issue. For the record, Red Hat 7.3 with Perl 5.6.1 and Red Hat 8.0 with Perl 5.8 seem to work fine. Sorry.
From: scottmo
Dave,

Thanks again for your efforts. It is not entirely unexpected that you were not able to reproduce the problem, since it is a highly fickle problem that seems to move around (appear/disappear, show up in different places) just by adding or deleting comments. In general, it has been very difficult to produce a small test case that triggers the problem reliably on other boxes, although I have been able to reproduce it on all architectures and OS levels I have available with a bit of tweaking.

I believe I understand the problem better now. The problem only manifests itself when using fetchall_arrayref (not with fetchrow_array, fetchrow_arrayref, fetchrow_hashref, etc.). Although this is not explained in the perldoc for DBI, what I have been able to gather is that fetchall_arrayref is unique in that after it fetches all the data, it automatically calls finish() on the statement handle, leaving it inactive. At this point it can be hazardous to try to use the statement handle to access metadata information like NUM_OF_FIELDS, NAME, or TYPE, since the memory for these data structures has been freed. This agrees with what I saw with valgrind -- any time I tried to access $sth->{NAME} or $sth->{TYPE} or $sth->{NUM_OF_FIELDS} after doing the fetchall_arrayref, valgrind warned that these were invalid references to memory that had already been freed!

I have seen that mysql and other databases have suffered from the same problem. It appears that postgresql and oracle, at least, have corrected the problem in their DBD code such that it does not create a segfault situation.

I have been able to work around the situation by grabbing the pertinent metadata items before calling fetchall_arrayref. It requires some extra array copies but is not debilitating.

$sth = $dbh->prepare($sql);
$sth->execute();
my %results = ();
$results{rows}  = $sth->rows;
$results{cols}  = $sth->{NUM_OF_FIELDS};
$results{names} = [@{$sth->{NAME}}];
$results{data}  = $sth->fetchall_arrayref();

Although I may be able to agree that this is not necessarily a bug in your code, since you are dutifully deactivating the statement handle after the fetch, perhaps you might still want to make some modifications that will prevent a segfault situation from occurring. I think a user should not be able to create a segmentation fault as a result of routine use of the API that is perfectly in accordance with the documented usage.

Thanks for your efforts!

Sincerely,
Scott Jackson
Pacific Northwest National Laboratory
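A minimal sketch of that workaround as a standalone script, not taken from the ticket: it assumes the temp.db file and test table created by the scripts above, and uses DBI's Active attribute purely as a defensive check (nothing in this ticket says DBD::SQLite 1.07 requires it).

#! /usr/bin/perl
# Sketch only (not from the ticket): copy column metadata while the handle
# is still active, before fetchall_arrayref() implicitly finishes it.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=temp.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });
my $sth = $dbh->prepare('SELECT * FROM test');
$sth->execute();

# Saved copy, taken before the handle is finished.
my @names = @{ $sth->{NAME} };
my $ncols = $sth->{NUM_OF_FIELDS};

my $rows = $sth->fetchall_arrayref();

# After the fetch the handle is inactive; use the saved copy rather than
# re-reading the driver-owned metadata attributes.
my @safe_names = $sth->{Active} ? @{ $sth->{NAME} } : @names;
print "Columns ($ncols): ", join(', ', @safe_names), "\n";
print scalar(@$rows), " row(s) fetched\n";

$dbh->disconnect();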
Added a croak() when you try to do this on an inactive sth. Will be in 1.13.
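For calling code, the practical effect of such a croak is an ordinary Perl exception instead of a segfault. A rough sketch of trapping it, not from the ticket: the exact message and trigger conditions in 1.13 are not quoted here and are assumed, and the same temp.db/test table from the scripts above is assumed to exist.

#! /usr/bin/perl
# Sketch only (not from the ticket): trap the driver's error with eval when
# reading metadata after fetchall_arrayref() has finished the handle.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=temp.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });
my $sth = $dbh->prepare('SELECT * FROM test');
$sth->execute();
my $rows = $sth->fetchall_arrayref();    # implicitly finishes the handle

my $names = eval { [ @{ $sth->{NAME} } ] };    # may croak on an inactive handle
if ($@) {
    warn "Column names unavailable after fetchall_arrayref: $@";
}

$dbh->disconnect();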