
proposal: \import #267

Open
kenshaw opened this issue Jun 18, 2021 · 4 comments

kenshaw commented Jun 18, 2021

Currently there is no easy way to import data into a database. While it is fairly straightforward to use \copy with a csvq database, a more generic way of doing bulk imports from a file on disk should be supported, as similar functionality exists in almost every database's native command-line client. usql should match this native functionality with its own \import command.

Similarly, since the csvq driver may not be available, a separate \import command should not rely on its presence, and should instead use Go's standard library. Since it is already possible to export query results to disk in CSV and JSON formats via \o and \pset format <TYPE>, a native usql \import should also support these formats.

As such, we propose an \import command similar to the following:

\import [-<TYPE>] <TABLE> [<FILE>]

Where TABLE is the name of the table to import into, TYPE is either csv, json, or another format to be supported in the future, and FILE is a path to a file on disk.

When FILE is not specified, a file named TABLE.TYPE will be looked for in the current working directory. If TYPE is also not specified, files with the extensions csv and then json will be tried. If no corresponding file is found, an error will be returned.
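Something along these lines would cover the lookup (the helper name is illustrative only, not part of the proposal):

import (
	"fmt"
	"os"
)

// resolveImportFile finds the default import file for a table when no FILE
// argument was given, trying TABLE.csv and then TABLE.json in the current
// working directory.
func resolveImportFile(table string) (string, error) {
	for _, ext := range []string{"csv", "json"} {
		name := table + "." + ext
		if _, err := os.Stat(name); err == nil {
			return name, nil
		}
	}
	return "", fmt.Errorf("no csv or json import file found for table %s", table)
}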

If TYPE is not specified, the command will first attempt to detect the import file's type from its extension, and failing that, via Go's http.DetectContentType.
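A rough sketch of that detection, using only the standard library (the helper name is illustrative; note that http.DetectContentType reports both CSV and JSON as text/plain, so the sketch additionally peeks at the first non-whitespace byte):

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

// detectImportType guesses the import format from the file extension, and
// failing that by sniffing the start of the file.
func detectImportType(path string) (string, error) {
	switch strings.ToLower(filepath.Ext(path)) {
	case ".csv":
		return "csv", nil
	case ".json":
		return "json", nil
	}
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	// http.DetectContentType considers at most the first 512 bytes.
	buf := make([]byte, 512)
	n, err := f.Read(buf)
	if err != nil && err != io.EOF {
		return "", err
	}
	buf = buf[:n]
	if typ := http.DetectContentType(buf); !strings.HasPrefix(typ, "text/") {
		return "", fmt.Errorf("unsupported content type %q for %s", typ, path)
	}
	// Both CSV and JSON sniff as text/plain, so distinguish by the first
	// non-whitespace byte.
	if b := bytes.TrimLeft(buf, " \t\r\n"); len(b) > 0 && (b[0] == '[' || b[0] == '{') {
		return "json", nil
	}
	return "csv", nil
}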

The \import command would then process the data and prepare an INSERT statement for each row found in the CSV or JSON data. CSV files are required to have a header row containing the column names, and JSON data is expected to have the same shape as the output generated by tblfmt's JSON encoder.
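For the CSV path, the row-to-INSERT step could look roughly like this (assuming a database/sql handle; the helper name and the use of ? placeholders are assumptions here, since placeholder syntax varies by driver, and real code would also quote identifiers and batch the inserts):

import (
	"database/sql"
	"encoding/csv"
	"fmt"
	"io"
	"os"
	"strings"
)

// importCSV reads a CSV file whose first record is a header of column names
// and executes one INSERT per remaining record.
func importCSV(db *sql.DB, table, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	r := csv.NewReader(f)
	header, err := r.Read()
	if err != nil {
		return fmt.Errorf("reading header: %w", err)
	}
	placeholders := strings.TrimSuffix(strings.Repeat("?,", len(header)), ",")
	query := fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s)",
		table, strings.Join(header, ", "), placeholders)
	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		args := make([]interface{}, len(record))
		for i, v := range record {
			args[i] = v
		}
		if _, err := db.Exec(query, args...); err != nil {
			return err
		}
	}
	return nil
}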

As such, with a properly implemented \import command, the following would be possible:

pg:booktest@localhost/booktest=> \pset format json
Output format is json.
pg:booktest@localhost/booktest=> select * from authors;
[{"author_id":1,"name":"Unknown Master"}]
pg:booktest@localhost/booktest=> select * from books;
[{"book_id":1,"author_id":1,"isbn":"1","book_type":"FICTION","title":"my book title","year":2016,"available":"2021-06-17T23:15:04.682659Z","tags":"{}"},{"book_id":2,"author_id":1,"isbn":"2","book_type":"FICTION","title":"changed second title","year":2016,"available":"2021-06-17T23:15:04.682659Z","tags":"{cool,disastor}"},{"book_id":3,"author_id":1,"isbn":"3","book_type":"FICTION","title":"the third book","year":2001,"available":"2021-06-17T23:15:04.682659Z","tags":"{cool}"}]
pg:booktest@localhost/booktest=> \o export.json
pg:booktest@localhost/booktest=> \g
pg:booktest@localhost/booktest=> \o
pg:booktest@localhost/booktest=> \import books export.json
@nineinchnick

\import might be a bit confusing with \i, how about \load?

@davidfetter

What you're thinking of as \import (along with the corresponding \export) is the \copy functionality, as far as said functionality actually goes.

@nineinchnick

This issue was created before we added \copy.

@davidfetter

Then perhaps this one's best closed out so people don't think it's still a live issue.
