Subject: New feature: pg_canonical_X
Date: Thu, 03 Sep 2015 22:40:23 +0300
To: bug-DBD-Pg [...] rt.cpan.org
From: Warstone [...] list.ru <warstone [...] list.ru>
I'd like to introduce 2 new methods:
$sth->pg_canonical_names
$sth->pg_canonical_ids
These methods give you information about "where the data came from". They use PQftable / PQftablecol from http://www.postgresql.org/docs/9.1/static/libpq-exec.html#LIBPQ-EXEC-SELECT-INFO
and are very handy for ORMs (you can construct ORM objects with your own data types from any raw SQL query, according to your metadata).
For example:
You have a BYTEA "data" column in the "tmp_table" table. Somewhere in your application's metadata you know that this column holds gzipped JSON. When you run plain SQL like...
SELECT u.id, t.data AS json FROM "user" u
LEFT JOIN tmp_table t ON t.user = u.id WHERE u.id = 1234567;
this method (pg_canonical_names) will tell you, for each of the two result columns, where it came from (schema, table, column):
public.user.id
public.tmp_table.data
and your ORM can unpack the data and decode the JSON accordingly.
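A minimal sketch of how an ORM could use this, assuming pg_canonical_names returns one [schema, table, column] triple per result column (undef for computed columns); the metadata hash and connection details below are made up for illustration:

use strict;
use warnings;
use DBI;
use JSON::PP ();
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# Hypothetical application metadata: which canonical column holds gzipped JSON.
my %column_meta = ( 'public.tmp_table.data' => 'gzipped_json' );

my $dbh = DBI->connect('dbi:Pg:dbname=test', undef, undef, { RaiseError => 1 });
my $sth = $dbh->prepare(q{
    SELECT u.id, t.data AS json FROM "user" u
    LEFT JOIN tmp_table t ON t.user = u.id WHERE u.id = ?
});
$sth->execute(1234567);

my $names = $sth->pg_canonical_names;   # assumed: [ [schema, table, column], ... ]

while (my @row = $sth->fetchrow_array) {
    for my $i (0 .. $#row) {
        my $triple = $names->[$i] or next;           # skip computed columns
        my $key = join '.', @$triple;                # e.g. "public.tmp_table.data"
        next unless ($column_meta{$key} // '') eq 'gzipped_json';
        gunzip \$row[$i] => \my $json or die "gunzip failed: $GunzipError";
        $row[$i] = JSON::PP->new->decode($json);     # now a Perl structure
    }
    # ... hand @row to the ORM layer ...
}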
These methods are backed directly by libpq and would be very useful.
pg_canonical_ids can be used if you are fine working with table OIDs and column positions (PostgreSQL internals); it is fast.
pg_canonical_names retrieves human-readable names, but it has to query the server for every column in the result set (so use it mainly for debugging).
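Continuing the sketch above, pg_canonical_ids could be combined with a local cache, assuming it returns one [table OID, column position] pair per result column (undef for computed columns), mirroring PQftable / PQftablecol:

my $ids = $sth->pg_canonical_ids;   # assumed: [ [table_oid, column_pos], ... ]

# Resolve OIDs to names once and cache them locally, instead of asking the
# server per column the way pg_canonical_names would.
my %oid_cache;
for my $pair (grep { defined } @$ids) {
    my ($tab_oid, $col_pos) = @$pair;
    $oid_cache{"$tab_oid.$col_pos"} //= $dbh->selectrow_arrayref(q{
        SELECT n.nspname, c.relname, a.attname
          FROM pg_class c
          JOIN pg_namespace n ON n.oid = c.relnamespace
          JOIN pg_attribute a ON a.attrelid = c.oid
         WHERE c.oid = ? AND a.attnum = ?
    }, undef, $tab_oid, $col_pos);
}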