
NAME

       sqlite-utils - sqlite-utils documentation

CLI TOOL AND PYTHON LIBRARY FOR MANIPULATING SQLITE DATABASES

       This library and command-line utility helps create SQLite databases from an existing collection of data.

       Most  of  the  functionality is available as either a Python API or through the sqlite-utils command-line
       tool.

       sqlite-utils is not intended to be a full ORM: the focus is utility helpers to make creating the  initial
       database and populating it with data as productive as possible.

       It is designed as a useful complement to Datasette.

       Cleaning data with sqlite-utils and Datasette provides a tutorial  introduction  (and  an  accompanying
       ten-minute video) about using this tool.

   Installation
       sqlite-utils is tested on Linux, macOS and Windows.

   Using Homebrew
       The sqlite-utils command-line tool can be installed on macOS using Homebrew:

          brew install sqlite-utils

       If you have it installed and want to upgrade to the most recent release, you can run:

          brew upgrade sqlite-utils

       Then run sqlite-utils --version to confirm the installed version.

   Using pip
       The sqlite-utils package on PyPI includes both the  sqlite_utils  Python  library  and  the  sqlite-utils
       command-line tool. You can install them using pip like so:

          pip install sqlite-utils

   Using pipx
       pipx  is  a  tool for installing Python command-line applications in their own isolated environments. You
       can use pipx to install the sqlite-utils command-line tool like this:

          pipx install sqlite-utils

   Alternatives to sqlite3
       By default, sqlite-utils uses the sqlite3 package bundled with the Python standard library.

       Depending on your operating system, this may come with some limitations.

       On  some  platforms  the  ability  to  load  additional  extensions  (via   conn.load_extension(...)   or
       --load-extension=/path/to/extension) may be disabled.

       You  may also see the error sqlite3.OperationalError: table sqlite_master may not be modified when trying
       to alter an existing table.

       You can work around these limitations by  installing  either  the  pysqlite3  package  or  the  sqlean.py
       package,  both  of  which provide drop-in replacements for the standard library sqlite3 module but with a
       recent version of SQLite and full support for loading extensions.

       To install sqlean.py (which has compiled binary  wheels  available  for  all  major  platforms)  run  the
       following:

          sqlite-utils install sqlean.py

       pysqlite3 and sqlean.py do not provide implementations of the .iterdump() method. To use that method (see
       Dumping  the  database  to  SQL) or the sqlite-utils dump command you should also install the sqlite-dump
       package:

          sqlite-utils install sqlite-dump

   Setting up shell completion
       You can configure shell tab completion for the sqlite-utils command using these commands.

       For bash:

          eval "$(_SQLITE_UTILS_COMPLETE=bash_source sqlite-utils)"

       For zsh:

          eval "$(_SQLITE_UTILS_COMPLETE=zsh_source sqlite-utils)"

       Add this code to ~/.zshrc or ~/.bashrc to automatically run it when you start a new shell.

       See the Click documentation for more details.

   sqlite-utils command-line tool
       The sqlite-utils command-line tool can be used to manipulate SQLite databases in a  number  of  different
       ways.

       Once  installed  the  tool  should  be  available  as  sqlite-utils.  It  can also be run using python -m
       sqlite_utils.

        • Running SQL queries
        • Returning JSON
        • Newline-delimited JSON
        • JSON arrays
        • Binary data in JSON
        • Nested JSON values
        • Returning CSV or TSV
        • Table-formatted output
        • Returning raw data, such as binary content
        • Using named parameters
        • UPDATE, INSERT and DELETE
        • Defining custom SQL functions
        • SQLite extensions
        • Attaching additional databases
        • Querying data directly using an in-memory database
        • Running queries directly against CSV or JSON
        • Explicitly specifying the format
        • Joining in-memory data against existing databases using --attach
        • --schema, --analyze, --dump and --save
        • Returning all rows in a table
        • Listing tables
        • Listing views
        • Listing indexes
        • Listing triggers
        • Showing the schema
        • Analyzing tables
        • Saving the analyzed table details
        • Creating an empty database
        • Inserting JSON data
        • Inserting binary data
        • Inserting newline-delimited JSON
        • Flattening nested JSON objects
        • Inserting CSV or TSV data
        • Alternative delimiters and quote characters
        • CSV files without a header row
        • Inserting unstructured data with --lines and --text
        • Applying conversions while inserting data
        • --convert with --lines
        • --convert with --text
        • Insert-replacing data
        • Upserting data
        • Executing SQL in bulk
        • Inserting data from files
        • Converting data in columns
        • Importing additional modules
        • Using the debugger
        • Defining a convert() function
        • sqlite-utils convert recipes
        • Saving the result to a different column
        • Converting a column into multiple columns
        • Creating tables
        • Renaming a table
        • Duplicating tables
        • Dropping tables
        • Transforming tables
        • Adding a primary key to a rowid table
        • Extracting columns into a separate table
        • Creating views
        • Dropping views
        • Adding columns
        • Adding columns automatically on insert/update
        • Adding foreign key constraints
        • Adding multiple foreign keys at once
        • Adding indexes for all foreign keys
        • Setting defaults and not null constraints
        • Creating indexes
        • Configuring full-text search
        • Executing searches
        • Enabling cached counts
        • Optimizing index usage with ANALYZE
        • Vacuum
        • Optimize
        • WAL mode
        • Dumping the database to SQL
        • Loading SQLite extensions
        • SpatiaLite helpers
        • Adding spatial indexes
        • Installing packages
        • Uninstalling packages
        • Experimental TUI

   Running SQL queries
       The sqlite-utils query command lets you run queries directly against a SQLite database file. This is  the
       default subcommand, so the following two examples work the same way:

          sqlite-utils query dogs.db "select * from dogs"

          sqlite-utils dogs.db "select * from dogs"

       NOTE:
          In Python: db.query()  CLI reference: sqlite-utils query
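
        The Python library equivalent is db.query() - a minimal sketch, assuming dogs.db exists:

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           # db.query() returns a generator of dictionaries, one per row
           for row in db.query("select * from dogs"):
               print(row)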

   Returning JSON
       The default format returned for queries is JSON:

          sqlite-utils dogs.db "select * from dogs"

          [{"id": 1, "age": 4, "name": "Cleo"},
           {"id": 2, "age": 2, "name": "Pancakes"}]

   Newline-delimited JSON
       Use --nl to get back newline-delimited JSON objects:

          sqlite-utils dogs.db "select * from dogs" --nl

          {"id": 1, "age": 4, "name": "Cleo"}
          {"id": 2, "age": 2, "name": "Pancakes"}

   JSON arrays
       You can use --arrays to request arrays instead of objects:

          sqlite-utils dogs.db "select * from dogs" --arrays

          [[1, 4, "Cleo"],
           [2, 2, "Pancakes"]]

       You can also combine --arrays and --nl:

          sqlite-utils dogs.db "select * from dogs" --arrays --nl

          [1, 4, "Cleo"]
          [2, 2, "Pancakes"]

       If you want to pretty-print the output further, you can pipe it through python -mjson.tool:

          sqlite-utils dogs.db "select * from dogs" | python -mjson.tool

          [
              {
                  "id": 1,
                  "age": 4,
                  "name": "Cleo"
              },
              {
                  "id": 2,
                  "age": 2,
                  "name": "Pancakes"
              }
          ]

   Binary data in JSON
        Binary strings are not valid JSON, so BLOB columns containing binary data will  be  returned  as  a  JSON
        object containing base64-encoded data, which looks like this:

          sqlite-utils dogs.db "select name, content from images" | python -mjson.tool

          [
              {
                  "name": "transparent.gif",
                  "content": {
                      "$base64": true,
                      "encoded": "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
                  }
              }
          ]
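
        To turn that representation back into bytes in Python, decode the encoded key. A minimal  sketch,  where
        output is a stand-in variable holding the JSON string shown above:

           import base64
           import json

           records = json.loads(output)  # output: the JSON string shown above
           gif_bytes = base64.b64decode(records[0]["content"]["encoded"])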

   Nested JSON values
       If one of your columns contains JSON, by default it will be returned as an escaped string:

          sqlite-utils dogs.db "select * from dogs" | python -mjson.tool

          [
              {
                  "id": 1,
                  "name": "Cleo",
                  "friends": "[{\"name\": \"Pancakes\"}, {\"name\": \"Bailey\"}]"
              }
          ]

       You can use the --json-cols option to automatically detect these JSON columns and output them  as  nested
       JSON data:

          sqlite-utils dogs.db "select * from dogs" --json-cols | python -mjson.tool

          [
              {
                  "id": 1,
                  "name": "Cleo",
                  "friends": [
                      {
                          "name": "Pancakes"
                      },
                      {
                          "name": "Bailey"
                      }
                  ]
              }
          ]

   Returning CSV or TSV
       You can use the --csv option to return results as CSV:

          sqlite-utils dogs.db "select * from dogs" --csv

          id,age,name
          1,4,Cleo
          2,2,Pancakes

       This  will  default  to  including  the  column  names  as  a  header  row.  To  exclude the headers, use
       --no-headers:

          sqlite-utils dogs.db "select * from dogs" --csv --no-headers

          1,4,Cleo
          2,2,Pancakes

       Use --tsv instead of --csv to get back tab-separated values:

          sqlite-utils dogs.db "select * from dogs" --tsv

          id  age     name
          1   4       Cleo
          2   2       Pancakes

   Table-formatted output
       You can use the --table option (or -t shortcut) to output query results as a table:

          sqlite-utils dogs.db "select * from dogs" --table

            id    age  name
          ----  -----  --------
             1      4  Cleo
             2      2  Pancakes

       You can use the --fmt option to specify different table formats, for example rst for reStructuredText:

          sqlite-utils dogs.db "select * from dogs" --fmt rst

          ====  =====  ========
            id    age  name
          ====  =====  ========
             1      4  Cleo
             2      2  Pancakes
          ====  =====  ========

       Available --fmt options are:

        • asciidoc
        • double_grid
        • double_outline
        • fancy_grid
        • fancy_outline
        • github
        • grid
        • heavy_grid
        • heavy_outline
        • html
        • jira
        • latex
        • latex_booktabs
        • latex_longtable
        • latex_raw
        • mediawiki
        • mixed_grid
        • mixed_outline
        • moinmoin
        • orgtbl
        • outline
        • pipe
        • plain
        • presto
        • pretty
        • psql
        • rounded_grid
        • rounded_outline
        • rst
        • simple
        • simple_grid
        • simple_outline
        • textile
        • tsv
        • unsafehtml
        • youtrack

       This list can also be found by running sqlite-utils query --help.

   Returning raw data, such as binary content
       If your table contains binary data in a BLOB you can use the --raw  option  to  output  specific  columns
       directly to standard out.

       For  example,  to  retrieve  a  binary  image  from  a BLOB column and store it in a file you can use the
       following:

          sqlite-utils photos.db "select contents from photos where id=1" --raw > myphoto.jpg

       To return the first column of each result as raw data, separated by newlines, use --raw-lines:

          sqlite-utils photos.db "select caption from photos" --raw-lines > captions.txt

   Using named parameters
       You can pass named parameters to the query using -p:

          sqlite-utils query dogs.db "select :num * :num2" -p num 5 -p num2 6

          [{":num * :num2": 30}]

       These will be correctly quoted and escaped in the SQL query, providing a safe way to combine other values
       with SQL.
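
        In the Python library the parameters are passed as a dictionary to db.query() - a minimal sketch:

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           # named parameters are supplied as the second argument
           result = list(db.query("select :num * :num2", {"num": 5, "num2": 6}))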

   UPDATE, INSERT and DELETE
       If you execute an UPDATE, INSERT or DELETE query the command will return the number of affected rows:

          sqlite-utils dogs.db "update dogs set age = 5 where name = 'Cleo'"

          [{"rows_affected": 1}]

   Defining custom SQL functions
       You can use the --functions option to pass a block of Python code that defines additional functions which
       can then be called by your SQL query.

       This example defines a function which extracts the domain from a URL:

          sqlite-utils query sites.db "select url, domain(url) from urls" --functions '
          from urllib.parse import urlparse

          def domain(url):
              return urlparse(url).netloc
          '

       Every callable object defined in the block will be registered as a SQL function with the same name,  with
       the exception of functions with names that begin with an underscore.
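
        In the Python library the same thing can be done with db.register_function() - a minimal sketch:

           from urllib.parse import urlparse

           import sqlite_utils

           db = sqlite_utils.Database("sites.db")

           @db.register_function
           def domain(url):
               return urlparse(url).netloc

           rows = list(db.query("select url, domain(url) from urls"))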

   SQLite extensions
       You can load SQLite extension modules using the --load-extension option, see Loading SQLite extensions.

          sqlite-utils dogs.db "select spatialite_version()" --load-extension=spatialite

          [{"spatialite_version()": "4.3.0a"}]

   Attaching additional databases
       SQLite  supports  cross-database  SQL  queries, which can join data from tables in more than one database
       file.

       You can attach one or more additional databases using the --attach option, providing an alias to use  for
       that database and the path to the SQLite file on disk.

       This  example  attaches  the  books.db database under the alias books and then runs a query that combines
       data from that database with the default dogs.db database:

          sqlite-utils dogs.db --attach books books.db \
             'select * from sqlite_master union all select * from books.sqlite_master'

       NOTE:
          In Python: db.attach()
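
        A minimal Python sketch of the same attach-and-query pattern:

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           db.attach("books", "books.db")
           rows = list(db.query(
               "select * from sqlite_master union all select * from books.sqlite_master"
           ))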

   Querying data directly using an in-memory database
        The sqlite-utils memory command works similarly to sqlite-utils query, but allows you to execute  queries
        against an in-memory database.

       You  can  also pass this command CSV or JSON files which will be loaded into a temporary in-memory table,
       allowing you to execute SQL against that data without a separate step to first convert it to SQLite.

       Without any extra arguments, this command executes SQL against the in-memory database directly:

          sqlite-utils memory 'select sqlite_version()'

          [{"sqlite_version()": "3.35.5"}]

        It takes all of the same output formatting options as sqlite-utils query: --csv, --tsv, --table and --nl:

          sqlite-utils memory 'select sqlite_version()' --csv

          sqlite_version()
          3.35.5

          sqlite-utils memory 'select sqlite_version()' --fmt grid

          +--------------------+
          | sqlite_version()   |
          +====================+
          | 3.35.5             |
          +--------------------+
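
        The Python library can create the same kind of in-memory database by passing memory=True  -  a  minimal
        sketch:

           import sqlite_utils

           db = sqlite_utils.Database(memory=True)
           print(list(db.query("select sqlite_version()")))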

   Running queries directly against CSV or JSON
       If you have data in CSV or JSON format you can load it into an in-memory SQLite database and run  queries
       against it directly in a single command using sqlite-utils memory like this:

          sqlite-utils memory data.csv "select * from data"

       You can pass multiple files to the command if you want to run joins between data from different files:

          sqlite-utils memory one.csv two.json \
            "select * from one join two on one.id = two.other_id"

       If  your  data  is  JSON  it  should be the same format supported by the sqlite-utils insert command - so
       either a single JSON object (treated as a single row) or a list of JSON objects.

        CSV data can be comma- or tab-delimited.

       The in-memory tables will be named after the files without  their  extensions.  The  tool  also  sets  up
       aliases  for  those  tables (using SQL views) as t1, t2 and so on, or you can use the alias t to refer to
       the first table:

          sqlite-utils memory example.csv "select * from t"

       If two files have the same name they will be assigned a numeric suffix:

          sqlite-utils memory foo/data.csv bar/data.csv "select * from data_2"

       To read from standard input, use either - or stdin as the filename - then use stdin or t  or  t1  as  the
       table name:

          cat example.csv | sqlite-utils memory - "select * from stdin"

       Incoming  CSV data will be assumed to use utf-8. If your data uses a different character encoding you can
       specify that with --encoding:

          cat example.csv | sqlite-utils memory - "select * from stdin" --encoding=latin-1

       If you are joining across multiple CSV files they must all use the same encoding.

       Column types  will  be  automatically  detected  in  CSV  or  TSV  data,  using  the  same  mechanism  as
       --detect-types  described  in  Inserting  CSV  or  TSV data. You can pass the --no-detect-types option to
       disable this automatic type detection and treat all CSV and TSV columns as TEXT.

   Explicitly specifying the format
       By default, sqlite-utils memory will attempt to detect the  incoming  data  format  (JSON,  TSV  or  CSV)
       automatically.

       You  can  instead  specify an explicit format by adding a :csv, :tsv, :json or :nl (for newline-delimited
       JSON) suffix to the filename. For example:

          sqlite-utils memory one.dat:csv two.dat:nl \
            "select * from one union select * from two"

       Here the contents of one.dat will be treated as CSV and the  contents  of  two.dat  will  be  treated  as
       newline-delimited JSON.

       To  explicitly  specify the format for data piped into the tool on standard input, use stdin:format - for
       example:

          cat one.dat | sqlite-utils memory stdin:csv "select * from stdin"

   Joining in-memory data against existing databases using --attach
        The --attach option can be used to attach database files to  the  in-memory  connection,  enabling  joins
        between in-memory data loaded from a file and tables in existing SQLite database files. An example:

          echo "id\n1\n3\n5" | sqlite-utils memory - --attach trees trees.db \
            "select * from trees.trees where rowid in (select id from stdin)"

       Here the --attach trees trees.db option makes the trees.db database available with an alias of trees.

       select * from trees.trees where ... can then query the trees table in that database.

       The  CSV  data  that  was  piped  into the script is available in the stdin table, so  ... where rowid in
       (select id from stdin) can be used to return rows from the trees table that match IDs that were piped  in
       as CSV content.

   --schema, --analyze, --dump and --save
       To see the in-memory database schema that would be used for a file or for multiple files, use --schema:

          sqlite-utils memory dogs.csv --schema

          CREATE TABLE [dogs] (
              [id] INTEGER,
              [age] INTEGER,
              [name] TEXT
          );
          CREATE VIEW t1 AS select * from [dogs];
          CREATE VIEW t AS select * from [dogs];

       You can run the equivalent of the analyze-tables command using --analyze:

          sqlite-utils memory dogs.csv --analyze

          dogs.id: (1/3)

            Total rows: 2
            Null rows: 0
            Blank rows: 0

            Distinct values: 2

          dogs.name: (2/3)

            Total rows: 2
            Null rows: 0
            Blank rows: 0

            Distinct values: 2

          dogs.age: (3/3)

            Total rows: 2
            Null rows: 0
            Blank rows: 0

            Distinct values: 2

       You  can  output  SQL  that  will  both  create  the tables and insert the full data used to populate the
       in-memory database using --dump:

          sqlite-utils memory dogs.csv --dump

          BEGIN TRANSACTION;
          CREATE TABLE [dogs] (
              [id] INTEGER,
              [age] INTEGER,
              [name] TEXT
          );
          INSERT INTO "dogs" VALUES('1','4','Cleo');
          INSERT INTO "dogs" VALUES('2','2','Pancakes');
          CREATE VIEW t1 AS select * from [dogs];
          CREATE VIEW t AS select * from [dogs];
          COMMIT;

       Passing --save other.db will instead use that SQL to populate a new database file:

          sqlite-utils memory dogs.csv --save dogs.db

        These features are mainly intended as debugging tools - for much finer-grained control over how  data  is
        inserted into a SQLite database file see Inserting JSON data and Inserting CSV or TSV data.

   Returning all rows in a table
       You can return every row in a specified table using the rows command:

          sqlite-utils rows dogs.db dogs

          [{"id": 1, "age": 4, "name": "Cleo"},
           {"id": 2, "age": 2, "name": "Pancakes"}]

       This command accepts the same output options as query - so you can pass --nl, --csv, --tsv, --no-headers,
       --table and --fmt.

       You can use the -c option to specify a subset of columns to return:

          sqlite-utils rows dogs.db dogs -c age -c name

          [{"age": 4, "name": "Cleo"},
           {"age": 2, "name": "Pancakes"}]

       You can filter rows using a where clause with the --where option:

          sqlite-utils rows dogs.db dogs -c name --where 'name = "Cleo"'

          [{"name": "Cleo"}]

       Or pass named parameters using --where in combination with -p:

          sqlite-utils rows dogs.db dogs -c name --where 'name = :name' -p name Cleo

          [{"name": "Cleo"}]

       You can define a sort order using --order column or --order 'column desc'.

       Use  --limit N to only return the first N rows. Use --offset N to return rows starting from the specified
       offset.

       NOTE:
          In Python: table.rows  CLI reference: sqlite-utils rows
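
        A minimal Python sketch of the same operations, using table.rows and table.rows_where():

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           # iterate over every row in the table
           for row in db["dogs"].rows:
               print(row)
           # the equivalent of --where with a named parameter
           rows = list(db["dogs"].rows_where("name = :name", {"name": "Cleo"}))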

   Listing tables
       You can list the names of tables in a database using the tables command:

          sqlite-utils tables mydb.db

          [{"table": "dogs"},
           {"table": "cats"},
           {"table": "chickens"}]

       You can output this list in CSV using the --csv or --tsv options:

          sqlite-utils tables mydb.db --csv --no-headers

          dogs
          cats
          chickens

       If you just want to see the FTS4 tables, you can use --fts4 (or --fts5 for FTS5 tables):

          sqlite-utils tables docs.db --fts4

          [{"table": "docs_fts"}]

       Use --counts to include a count of the number of rows in each table:

          sqlite-utils tables mydb.db --counts

          [{"table": "dogs", "count": 12},
           {"table": "cats", "count": 332},
           {"table": "chickens", "count": 9}]

       Use --columns to include a list of columns in each table:

          sqlite-utils tables dogs.db --counts --columns

          [{"table": "Gosh", "count": 0, "columns": ["c1", "c2", "c3"]},
           {"table": "Gosh2", "count": 0, "columns": ["c1", "c2", "c3"]},
           {"table": "dogs", "count": 2, "columns": ["id", "age", "name"]}]

       Use --schema to include the schema of each table:

          sqlite-utils tables dogs.db --schema --table

          table    schema
          -------  -----------------------------------------------
          Gosh     CREATE TABLE Gosh (c1 text, c2 text, c3 text)
          Gosh2    CREATE TABLE Gosh2 (c1 text, c2 text, c3 text)
          dogs     CREATE TABLE [dogs] (
                     [id] INTEGER,
                     [age] INTEGER,
                     [name] TEXT)

       The --nl, --csv, --tsv, --table and --fmt options are also available.

       NOTE:
          In Python: db.tables or db.table_names()  CLI reference: sqlite-utils tables
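
        A minimal Python sketch using db.table_names():

           import sqlite_utils

           db = sqlite_utils.Database("mydb.db")
           print(db.table_names())           # e.g. ["dogs", "cats", "chickens"]
           print(db.table_names(fts4=True))  # just the FTS4 tables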

   Listing views
       The views command shows any views defined in the database:

          sqlite-utils views sf-trees.db --table --counts --columns --schema

          view         count  columns               schema
          ---------  -------  --------------------  --------------------------------------------------------------
          demo_view   189144  ['qSpecies']          CREATE VIEW demo_view AS select qSpecies from Street_Tree_List
          hello            1  ['sqlite_version()']  CREATE VIEW hello as select sqlite_version()

       It takes the same options as the tables command:

        • --columns
        • --schema
        • --counts
        • --nl
        • --csv
        • --tsv
        • --table

       NOTE:
          In Python: db.views or db.view_names()  CLI reference: sqlite-utils views

   Listing indexes
       The indexes command lists any indexes configured for the database:

          sqlite-utils indexes covid.db --table

          table                             index_name                                                seqno    cid  name                 desc  coll      key
          --------------------------------  ------------------------------------------------------  -------  -----  -----------------  ------  ------  -----
          johns_hopkins_csse_daily_reports  idx_johns_hopkins_csse_daily_reports_combined_key             0     12  combined_key            0  BINARY      1
          johns_hopkins_csse_daily_reports  idx_johns_hopkins_csse_daily_reports_country_or_region        0      1  country_or_region       0  BINARY      1
          johns_hopkins_csse_daily_reports  idx_johns_hopkins_csse_daily_reports_province_or_state        0      2  province_or_state       0  BINARY      1
          johns_hopkins_csse_daily_reports  idx_johns_hopkins_csse_daily_reports_day                      0      0  day                     0  BINARY      1
          ny_times_us_counties              idx_ny_times_us_counties_date                                 0      0  date                    1  BINARY      1
          ny_times_us_counties              idx_ny_times_us_counties_fips                                 0      3  fips                    0  BINARY      1
          ny_times_us_counties              idx_ny_times_us_counties_county                               0      1  county                  0  BINARY      1
          ny_times_us_counties              idx_ny_times_us_counties_state                                0      2  state                   0  BINARY      1

       It shows indexes across all tables. To see indexes for specific tables, list those after the database:

          sqlite-utils indexes covid.db johns_hopkins_csse_daily_reports --table

       The command defaults to only showing the columns that are explicitly part of the index. To  also  include
       auxiliary columns use the --aux option - these columns will be listed with a key of 0.

       The command takes the same format options as the tables and views commands.

       NOTE:
          In Python: table.indexes  CLI reference: sqlite-utils indexes

   Listing triggers
       The triggers command shows any triggers configured for the database:

          sqlite-utils triggers global-power-plants.db --table

          name             table      sql
          ---------------  ---------  -----------------------------------------------------------------
          plants_insert    plants     CREATE TRIGGER [plants_insert] AFTER INSERT ON [plants]
                                      BEGIN
                                          INSERT OR REPLACE INTO [_counts]
                                          VALUES (
                                            'plants',
                                            COALESCE(
                                              (SELECT count FROM [_counts] WHERE [table] = 'plants'),
                                            0
                                            ) + 1
                                          );
                                      END

       It  defaults  to  showing  triggers  for all tables. To see triggers for one or more specific tables pass
       their names as arguments:

          sqlite-utils triggers global-power-plants.db plants

       The command takes the same format options as the tables and views commands.

       NOTE:
          In Python: table.triggers or db.triggers  CLI reference: sqlite-utils triggers

   Showing the schema
       The sqlite-utils schema command shows the full SQL schema for the database:

          sqlite-utils schema dogs.db

          CREATE TABLE "dogs" (
              [id] INTEGER PRIMARY KEY,
              [name] TEXT
          );

       This will show the schema for every table and index in the database.  To  view  the  schema  just  for  a
       specified subset of tables pass those as additional arguments:

          sqlite-utils schema dogs.db dogs chickens

       NOTE:
          In Python: table.schema or db.schema  CLI reference: sqlite-utils schema
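
        A minimal Python sketch using the schema properties:

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           print(db.schema)          # SQL schema for the whole database
           print(db["dogs"].schema)  # schema for just the dogs table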

   Analyzing tables
       When  working  with  a  new  database  it  can  be  useful  to  get an idea of the shape of the data. The
       sqlite-utils analyze-tables command inspects specified tables (or all tables) and calculates some  useful
       details about each of the columns in those tables.

       To inspect the tags table in the github.db database, run the following:

          sqlite-utils analyze-tables github.db tags

          tags.repo: (1/3)

            Total rows: 261
            Null rows: 0
            Blank rows: 0

            Distinct values: 14

            Most common:
              88: 107914493
              75: 140912432
              27: 206156866

            Least common:
              1: 209590345
              2: 206649770
              2: 303218369

          tags.name: (2/3)

            Total rows: 261
            Null rows: 0
            Blank rows: 0

            Distinct values: 175

            Most common:
              10: 0.2
              9: 0.1
              7: 0.3

            Least common:
              1: 0.1.1
              1: 0.11.1
              1: 0.1a2

          tags.sha: (3/3)

            Total rows: 261
            Null rows: 0
            Blank rows: 0

            Distinct values: 261

       For  each  column this tool displays the number of null rows, the number of blank rows (rows that contain
       an empty string), the number of distinct values and, for columns that are not entirely distinct, the most
       common and least common values.

       If you do not specify any tables every table in the database will be analyzed:

          sqlite-utils analyze-tables github.db

       If you wish to analyze one or more specific columns, use the -c option:

          sqlite-utils analyze-tables github.db tags -c sha

       To show more than 10 common values, use --common-limit 20.  To skip the most common or least common value
       analysis, use --no-most or --no-least:

          sqlite-utils analyze-tables github.db tags --common-limit 20 --no-least

   Saving the analyzed table details
       analyze-tables can take quite a while to run for large database files. You can save the  results  of  the
       analysis to a database table called _analyze_tables_ using the --save option:

          sqlite-utils analyze-tables github.db --save

       The _analyze_tables_ table has the following schema:

          CREATE TABLE [_analyze_tables_] (
              [table] TEXT,
              [column] TEXT,
              [total_rows] INTEGER,
              [num_null] INTEGER,
              [num_blank] INTEGER,
              [num_distinct] INTEGER,
              [most_common] TEXT,
              [least_common] TEXT,
              PRIMARY KEY ([table], [column])
          );

       The  most_common  and  least_common  columns will contain nested JSON arrays of the most common and least
       common values that look like this:

          [
              ["Del Libertador, Av", 5068],
              ["Alberdi Juan Bautista Av.", 4612],
              ["Directorio Av.", 4552],
              ["Rivadavia, Av", 4532],
              ["Yerbal", 4512],
              ["Cosquín", 4472],
              ["Estado Plurinacional de Bolivia", 4440],
              ["Gordillo Timoteo", 4424],
              ["Montiel", 4360],
              ["Condarco", 4288]
          ]

   Creating an empty database
       You can create a new empty database file using the create-database command:

          sqlite-utils create-database empty.db

       To enable WAL mode on the newly created database add the --enable-wal option:

          sqlite-utils create-database empty.db --enable-wal

       To enable SpatiaLite metadata on a newly created database, add the --init-spatialite flag:

          sqlite-utils create-database empty.db --init-spatialite

       That will look for SpatiaLite in a set of predictable locations. To load it from somewhere else, use  the
       --load-extension option:

          sqlite-utils create-database empty.db --init-spatialite --load-extension /path/to/spatialite.so

   Inserting JSON data
       If  you  have  data  as JSON, you can use sqlite-utils insert tablename to insert it into a database. The
       table will be created with the correct (automatically detected) columns if it does not already exist.

        You can pass in a single JSON object or a list of JSON objects, either as a filename or piped directly to
        standard input (by using - as the filename).

       Here's the simplest possible example:

          echo '{"name": "Cleo", "age": 4}' | sqlite-utils insert dogs.db dogs -

       To specify a column as the primary key, use --pk=column_name.

       To create a compound primary key across more than one column, use --pk multiple times.

       If you feed it a JSON list it will insert multiple records. For example, if dogs.json looks like this:

          [
              {
                  "id": 1,
                  "name": "Cleo",
                  "age": 4
              },
              {
                  "id": 2,
                  "name": "Pancakes",
                  "age": 2
              },
              {
                  "id": 3,
                  "name": "Toby",
                  "age": 6
              }
          ]

       You can import all three records into an automatically created dogs table and set the id  column  as  the
       primary key like so:

          sqlite-utils insert dogs.db dogs dogs.json --pk=id
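
        A minimal Python sketch of the same import, using table.insert_all():

           import json

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           with open("dogs.json") as f:
               # creates the table if it is missing, with id as the primary key
               db["dogs"].insert_all(json.load(f), pk="id")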

       You can skip inserting any records that have a primary key that already exists using --ignore:

          sqlite-utils insert dogs.db dogs dogs.json --ignore

       You can delete all the existing rows in the table before inserting the new records using --truncate:

          sqlite-utils insert dogs.db dogs dogs.json --truncate

       You can add the --analyze option to run ANALYZE against the table after the rows have been inserted.

   Inserting binary data
       You  can  insert binary data into a BLOB column by first encoding it using base64 and then structuring it
       like this:

          [
              {
                  "name": "transparent.gif",
                  "content": {
                      "$base64": true,
                      "encoded": "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
                  }
              }
          ]

   Inserting newline-delimited JSON
       You can also import newline-delimited JSON using the --nl option:

          echo '{"id": 1, "name": "Cleo"}
          {"id": 2, "name": "Suna"}' | sqlite-utils insert creatures.db creatures - --nl

       Newline-delimited JSON consists of full JSON objects separated by newlines.

       If you are processing data using jq you can use the jq -c option to output valid newline-delimited JSON.

        Since Datasette can export newline-delimited  JSON,  you  can  combine  Datasette  and  sqlite-utils like
        so:

          curl -L "https://latest.datasette.io/fixtures/facetable.json?_shape=array&_nl=on" \
              | sqlite-utils insert nl-demo.db facetable - --pk=id --nl

       You  can also pipe sqlite-utils together to create a new SQLite database file containing the results of a
       SQL query against another database:

          sqlite-utils sf-trees.db \
              "select TreeID, qAddress, Latitude, Longitude from Street_Tree_List" --nl \
            | sqlite-utils insert saved.db trees - --nl

          sqlite-utils saved.db "select * from trees limit 5" --csv

          TreeID,qAddress,Latitude,Longitude
          141565,501X Baker St,37.7759676911831,-122.441396661871
          232565,940 Elizabeth St,37.7517102172731,-122.441498017841
          119263,495X Lakeshore Dr,,
          207368,920 Kirkham St,37.760210314285,-122.47073935813
          188702,1501 Evans Ave,37.7422086702947,-122.387293152263

   Flattening nested JSON objects
       sqlite-utils insert and sqlite-utils memory both expect incoming JSON data to consist of an array of JSON
       objects, where the top-level keys of each object will become columns in the created database table.

       If your data is nested you can use the --flatten option to create  columns  that  are  derived  from  the
       nested data.

       Consider this example document, in a file called log.json:

          {
              "httpRequest": {
                  "latency": "0.112114537s",
                  "requestMethod": "GET",
                  "requestSize": "534",
                  "status": 200
              },
              "insertId": "6111722f000b5b4c4d4071e2",
              "labels": {
                  "service": "datasette-io"
              }
          }

       Inserting  this into a table using sqlite-utils insert logs.db logs log.json will create a table with the
       following schema:

          CREATE TABLE [logs] (
             [httpRequest] TEXT,
             [insertId] TEXT,
             [labels] TEXT
          );

       With the --flatten option columns will  be  created  using  topkey_nextkey  column  names  -  so  running
       sqlite-utils insert logs.db logs log.json --flatten will create the following schema instead:

          CREATE TABLE [logs] (
             [httpRequest_latency] TEXT,
             [httpRequest_requestMethod] TEXT,
             [httpRequest_requestSize] TEXT,
             [httpRequest_status] INTEGER,
             [insertId] TEXT,
             [labels_service] TEXT
          );
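
        The Python library exposes the same flattening logic as  the  sqlite_utils.utils.flatten()  utility  -  a
        minimal sketch:

           import json

           import sqlite_utils
           from sqlite_utils.utils import flatten

           db = sqlite_utils.Database("logs.db")
           with open("log.json") as f:
               record = json.load(f)
           # flatten() turns {"httpRequest": {"status": 200}} into {"httpRequest_status": 200}
           db["logs"].insert(flatten(record))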

   Inserting CSV or TSV data
       If your data is in CSV format, you can insert it using the --csv option:

          sqlite-utils insert dogs.db dogs dogs.csv --csv

       For tab-delimited data, use --tsv:

          sqlite-utils insert dogs.db dogs dogs.tsv --tsv

        Data is expected to be encoded as Unicode UTF-8. If your data uses another character  encoding  you  can
        specify it using the --encoding option:

          sqlite-utils insert dogs.db dogs dogs.tsv --tsv --encoding=latin-1

       To stop inserting after a specified number of records - useful for getting a faster preview  of  a  large
       file - use the --stop-after option:

          sqlite-utils insert dogs.db dogs dogs.csv --csv --stop-after=10

       A  progress  bar  is  displayed  when inserting data from a file. You can hide the progress bar using the
       --silent option.

        By default every column inserted from a CSV or TSV file will be of type  TEXT.  To  automatically  detect
        column types - resulting in a mix of TEXT, INTEGER and FLOAT columns - use the --detect-types option (or
        its shortcut -d).

       For example, given a creatures.csv file containing this:

          name,age,weight
          Cleo,6,45.5
          Dori,1,3.5

       The following command:

          sqlite-utils insert creatures.db creatures creatures.csv --csv --detect-types

       Will produce this schema:

          sqlite-utils schema creatures.db

          CREATE TABLE "creatures" (
             [name] TEXT,
             [age] INTEGER,
             [weight] FLOAT
          );

       You can set the SQLITE_UTILS_DETECT_TYPES environment variable if  you  want  --detect-types  to  be  the
       default behavior:

          export SQLITE_UTILS_DETECT_TYPES=1

       If a CSV or TSV file includes empty cells, like this one:

          name,age,weight
          Cleo,6,
          Dori,,3.5

       They will be imported into SQLite as empty string values, "".

       To import them as NULL values instead, use the --empty-null option:

          sqlite-utils insert creatures.db creatures creatures.csv --csv --empty-null

   Alternative delimiters and quote characters
       If  your  file  uses a delimiter other than , or a quote character other than " you can attempt to detect
       delimiters or you can specify them explicitly.

       The --sniff option can be used to attempt to detect the delimiters:

          sqlite-utils insert dogs.db dogs dogs.csv --sniff

       Alternatively, you can specify them using the --delimiter and --quotechar options.

       Here's a CSV file that uses ; for delimiters and the | symbol for quote characters:

          name;description
          Cleo;|Very fine; a friendly dog|
          Pancakes;A local corgi

       You can import that using:

          sqlite-utils insert dogs.db dogs dogs.csv --delimiter=";" --quotechar="|"

       Passing --delimiter, --quotechar or --sniff implies --csv, so you can omit the --csv option.

   CSV files without a header row
       The first row of any CSV or TSV file is expected to contain the names of the columns in that file.

        If your file does not include this row, you can use the --no-headers option  to  specify  that  the  tool
        should not treat the first row as headers.

       If  you  do this, the table will be created with column names called untitled_1 and untitled_2 and so on.
       You can then rename them using the sqlite-utils transform ... --rename command, see Transforming tables.

   Inserting unstructured data with --lines and --text
       If you have an unstructured file you can insert its contents into a  table  with  a  single  line  column
       containing  each  line  from  the file using --lines. This can be useful if you intend to further analyze
       those lines using SQL string functions or sqlite-utils convert:

          sqlite-utils insert logs.db loglines logfile.log --lines

       This will produce the following schema:

          CREATE TABLE [loglines] (
             [line] TEXT
          );

       You can also insert the entire contents of the file into a single column called text using --text:

          sqlite-utils insert content.db content file.txt --text

       The schema here will be:

          CREATE TABLE [content] (
             [text] TEXT
          );

   Applying conversions while inserting data
       The --convert option can be used to apply a Python conversion function to  imported  data  before  it  is
       inserted into the database. It works in a similar way to sqlite-utils convert.

       Your Python function will be passed a dictionary called row for each item that is being imported. You can
       modify  that  dictionary  and  return it - or return a fresh dictionary - to change the data that will be
       inserted.

       Given a JSON file called dogs.json containing this:

          [
              {"id": 1, "name": "Cleo"},
              {"id": 2, "name": "Pancakes"}
          ]

       The following command will insert that data and add an is_good column set to 1 for each dog:

          sqlite-utils insert dogs.db dogs dogs.json --convert 'row["is_good"] = 1'

       The --convert option also works with the --csv, --tsv and --nl insert options.

       As with sqlite-utils convert you can use --import to import  additional  Python  modules,  see  Importing
       additional modules for details.

       You  can  also  pass  code that runs some initialization steps and defines a convert(value) function, see
       Defining a convert() function.

   --convert with --lines
       Things work slightly differently when combined with the --lines or --text options.

       With --lines, instead of being passed a row dictionary  your  function  will  be  passed  a  line  string
       representing each line of the input. Given a file called access.log containing the following:

          INFO:     127.0.0.1:60581 - GET / HTTP/1.1 200 OK
          INFO:     127.0.0.1:60581 - GET /foo/-/static/app.css?cead5a HTTP/1.1 200 OK

       You could convert it into structured data like so:

          sqlite-utils insert logs.db loglines access.log --convert '
          type, source, _, verb, path, _, status, _ = line.split()
          return {
              "type": type,
              "source": source,
              "verb": verb,
              "path": path,
              "status": status,
          }' --lines

       The resulting table would look like this:
                      ┌───────┬─────────────────┬──────┬──────────────────────────────┬────────┐
                      │ type  │ source          │ verb │ path                         │ status │
                      ├───────┼─────────────────┼──────┼──────────────────────────────┼────────┤
                      │ INFO: │ 127.0.0.1:60581 │ GET  │ /                            │ 200    │
                      ├───────┼─────────────────┼──────┼──────────────────────────────┼────────┤
                      │ INFO: │ 127.0.0.1:60581 │ GET  │ /foo/-/static/app.css?cead5a │ 200    │
                      └───────┴─────────────────┴──────┴──────────────────────────────┴────────┘

   --convert with --text
       With  --text  the entire input to the command will be made available to the function as a variable called
       text.

       The function can return a single dictionary which will be inserted as a single row, or it  can  return  a
       list or iterator of dictionaries, each of which will be inserted.

       Here's how to use --convert and --text to insert one record per word in the input:

          echo 'A bunch of words' | sqlite-utils insert words.db words - \
              --text --convert '({"word": w} for w in text.split())'

       The result looks like this:

          sqlite-utils dump words.db

          BEGIN TRANSACTION;
          CREATE TABLE [words] (
             [word] TEXT
          );
          INSERT INTO "words" VALUES('A');
          INSERT INTO "words" VALUES('bunch');
          INSERT INTO "words" VALUES('of');
          INSERT INTO "words" VALUES('words');
          COMMIT;

   Insert-replacing data
       The  --replace  option  to  insert  causes  any existing records with the same primary key to be replaced
       entirely by the new records.

        To replace the dog with an ID of 2 with a new record, run the following:

          echo '{"id": 2, "name": "Pancakes", "age": 3}' | \
              sqlite-utils insert dogs.db dogs - --pk=id --replace
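
        A minimal Python sketch of the same insert-replace, using replace=True:

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           db["dogs"].insert(
               {"id": 2, "name": "Pancakes", "age": 3}, pk="id", replace=True
           )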

   Upserting data
        Upserting is update-or-insert. If a row exists with the specified primary key the provided  columns  will
        be updated. If no such row exists, a new row will be created.

       Unlike  insert  --replace,  an upsert will ignore any column values that exist but are not present in the
       upsert document.

       For example:

          echo '{"id": 2, "age": 4}' | \
              sqlite-utils upsert dogs.db dogs - --pk=id

       This will update the dog with an ID of 2 to have an age of 4, creating a new record (with a null name) if
       one does not exist. If a row DOES exist the name will be left as-is.

       The command will fail if you reference columns that do not exist on the table.  To  automatically  create
       missing columns, use the --alter option.

       NOTE:
          upsert  in sqlite-utils 1.x worked like insert ... --replace does in 2.x. See issue #66 for details of
          this change.
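
        A minimal Python sketch of the same upsert, with alter=True to add any missing columns:

           import sqlite_utils

           db = sqlite_utils.Database("dogs.db")
           # only the supplied columns are updated; others are left as-is
           db["dogs"].upsert({"id": 2, "age": 4}, pk="id", alter=True)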

   Executing SQL in bulk
       If you have a JSON, newline-delimited JSON, CSV or TSV file you can execute a bulk SQL query  using  each
       of the records in that file using the sqlite-utils bulk command.

       The  command  takes  the database file, the SQL to be executed and the file containing records to be used
       when evaluating the SQL query.

       The SQL query should include :named parameters that match the keys in the records.

       For example, given a chickens.csv CSV file containing the following:

          id,name
          1,Blue
          2,Snowy
          3,Azi
          4,Lila
          5,Suna
          6,Cardi

       You could insert those rows into a pre-created chickens table like so:

          sqlite-utils bulk chickens.db \
            'insert into chickens (id, name) values (:id, :name)' \
            chickens.csv --csv

       This command takes the same options as the sqlite-utils insert command - so it defaults to expecting JSON
       but can accept other formats using --csv or --tsv or --nl or other options described above.

       By default all of the SQL queries will be executed in a single transaction. To commit every  20  records,
       use --batch-size 20.
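
        In Python the same bulk pattern can be expressed with the underlying connection's executemany()  method
        - a minimal sketch:

           import csv

           import sqlite_utils

           db = sqlite_utils.Database("chickens.db")
           with open("chickens.csv", newline="") as f:
               # each CSV row supplies the :id and :name parameters for one execution
               db.conn.executemany(
                   "insert into chickens (id, name) values (:id, :name)",
                   csv.DictReader(f),
               )
           db.conn.commit()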

   Inserting data from files
       The  insert-files  command  can be used to insert the content of files, along with their metadata, into a
       SQLite table.

       Here's an example that inserts all of the GIF files in the current directory  into  a  gifs.db  database,
       placing the file contents in an images table:

          sqlite-utils insert-files gifs.db images *.gif

       You  can  also  pass one or more directories, in which case every file in those directories will be added
       recursively:

          sqlite-utils insert-files gifs.db images path/to/my-gifs

       By default this command will create a table with the following schema:

          CREATE TABLE [images] (
              [path] TEXT PRIMARY KEY,
              [content] BLOB,
              [size] INTEGER
          );

       Content will be treated as binary by default and stored in a BLOB column. You can use the  --text  option
       to store that content in a TEXT column instead.

       You  can  customize  the  schema  using one or more -c options. For a table schema that includes just the
       path, MD5 hash and last modification time of the file, you would use this:

          sqlite-utils insert-files gifs.db images *.gif -c path -c md5 -c mtime --pk=path

       This will result in the following schema:

          CREATE TABLE [images] (
              [path] TEXT PRIMARY KEY,
              [md5] TEXT,
              [mtime] FLOAT
          );

       Note that there's no content column here at all - if you specify custom columns  using  -c  you  need  to
       include -c content to create that column.

       You  can change the name of one of these columns using a -c colname:coldef parameter. To rename the mtime
       column to last_modified you would use this:

          sqlite-utils insert-files gifs.db images *.gif \
              -c path -c md5 -c last_modified:mtime --pk=path

       You can pass --replace or --upsert to indicate what should happen if you try to insert  a  file  with  an
       existing primary key. Pass --alter to cause any missing columns to be added to the table.

       The full list of column definitions you can use is as follows:

       name   The name of the file, e.g. cleo.jpg

       path   The path to the file relative to the root folder, e.g. pictures/cleo.jpg

       fullpath
              The fully resolved path to the image, e.g. /home/simonw/pictures/cleo.jpg

       sha256 The SHA256 hash of the file contents

       md5    The MD5 hash of the file contents

       mode   The permission bits of the file, as an integer - you may want to convert this to octal

       content
              The binary file contents, which will be stored as a BLOB

       content_text
              The text file contents, which will be stored as TEXT

       mtime  The modification time of the file, as floating point seconds since the Unix epoch

       ctime  The creation time of the file, as floating point seconds since the Unix epoch

       mtime_int
              The modification time as an integer rather than a float

       ctime_int
              The creation time as an integer rather than a float

       mtime_iso
              The modification time as an ISO timestamp, e.g. 2020-07-27T04:24:06.654246

       ctime_iso
               The creation time as an ISO timestamp

       size   The integer size of the file in bytes

       stem   The filename without the extension - for file.txt.gz this would be file.txt

       suffix The file extension - for file.txt.gz this would be .gz

       You can insert data piped from standard input like this:

          cat dog.jpg | sqlite-utils insert-files dogs.db pics - --name=dog.jpg

       The  -  argument  indicates  data  should be read from standard input. The string passed using the --name
       option will be used for the file name and path values.

       When inserting data from standard input only the following column definitions are supported: name,  path,
       content, content_text, sha256, md5 and size.

   Converting data in columns
       The convert command can be used to transform the data in a specified column - for example to parse a date
       string into an ISO timestamp, or to split a string of tags into a JSON array.

       The  command  accepts  a  database, table, one or more columns and a string of Python code to be executed
       against the values from those columns. The following example would replace the  values  in  the  headline
       column in the articles table with an upper-case version:

          sqlite-utils convert content.db articles headline 'value.upper()'

       The  Python  code  is passed as a string. Within that Python code the value variable will be the value of
       the current column.

       The code you provide will be compiled into a function that takes value as a single argument. If you break
       your function body into multiple lines the last line should be a return statement:

          sqlite-utils convert content.db articles headline '
          value = str(value)
          return value.upper()'

       Your code will be automatically wrapped in a  function,  but  you  can  also  define  a  function  called
       convert(value) which will be called, if available:

          sqlite-utils convert content.db articles headline '
          def convert(value):
              return value.upper()'

       Use a CODE value of - to read from standard input:

          cat mycode.py | sqlite-utils convert content.db articles headline -

       Where mycode.py contains a fragment of Python code that looks like this:

          def convert(value):
              return value.upper()

       The  conversion will be applied to every row in the specified table. You can limit that to just rows that
       match a WHERE clause using --where:

          sqlite-utils convert content.db articles headline 'value.upper()' \
              --where "headline like '%cat%'"

       You can include named parameters in your where clause  and  populate  them  using  one  or  more  --param
       options:

          sqlite-utils convert content.db articles headline 'value.upper()' \
              --where "headline like :query" \
              --param query '%cat%'

       The  --dry-run  option  will  output  a  preview  of  the  conversion against the first ten rows, without
       modifying the database.

       By default any rows with a falsey value for the column - such as 0 or null - will  be  skipped.  Use  the
       --no-skip-false option to disable this behaviour.
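
        The Python library offers a corresponding table.convert() method - a minimal  sketch,  including  the
        equivalent of --where with a named parameter:

           import sqlite_utils

           db = sqlite_utils.Database("content.db")
           db["articles"].convert(
               "headline",
               lambda value: value.upper(),
               where="headline like :query",
               where_args={"query": "%cat%"},
           )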

   Importing additional modules
       You  can specify Python modules that should be imported and made available to your code using one or more
       --import options. This example uses the textwrap module to wrap the content column at 100 characters:

          sqlite-utils convert content.db articles content \
              '"\n".join(textwrap.wrap(value, 100))' \
              --import=textwrap

       This supports nested imports as well, for example to use ElementTree:

          sqlite-utils convert content.db articles content \
              'xml.etree.ElementTree.fromstring(value).attrib["title"]' \
              --import=xml.etree.ElementTree

   Using the debugger
       If an error occurs while running your conversion operation you may see a message like this:

          user-defined function raised exception

       Add the --pdb option to catch the error and open the  Python  debugger  at  that  point.  The  conversion
       operation will exit after you type q in the debugger.

        Here's an example debugging session. First, create an articles table with invalid  XML  in  the  content
        column:

          echo '{"content": "This is not XML"}' | sqlite-utils insert content.db articles -

       Now run the conversion with the --pdb option:

          sqlite-utils convert content.db articles content \
              'xml.etree.ElementTree.fromstring(value).attrib["title"]' \
              --import=xml.etree.ElementTree \
              --pdb

       When the error occurs the debugger will open:

          Exception raised, dropping into pdb...: syntax error: line 1, column 0
          > .../python3.11/xml/etree/ElementTree.py(1338)XML()
          -> parser.feed(text)
          (Pdb) args
          text = 'This is not XML'
          parser = <xml.etree.ElementTree.XMLParser object at 0x102c405e0>
          (Pdb) q

       args here shows the arguments to the current function in the stack. The Python pdb documentation has full
       details on the other available commands.

   Defining a convert() function
       Instead of providing a single line of code to be executed against each value, you can define  a  function
       called convert(value).

       This  mechanism  can  be  used  to execute one-off initialization code that runs once at the start of the
       conversion run.

       The following example adds a new score column, then populates it with a random number - after first
       seeding the random number generator to ensure that multiple runs produce the same results:

          sqlite-utils add-column content.db articles score float --not-null-default 1.0
          sqlite-utils convert content.db articles score '
          import random
          random.seed(10)

          def convert(value):
              return random.random()
          '
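
       The same operation is available from Python through the table.convert() method (see Converting data
       in columns in the Python API documentation). A rough sketch of the example above:

          from sqlite_utils import Database
          import random

          random.seed(10)

          db = Database("content.db")
          # Apply a Python function to every value in the score column
          db["articles"].convert("score", lambda value: random.random())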

   sqlite-utils convert recipes
       Various built-in recipe functions are available for common operations. These are:

       r.jsonsplit(value, delimiter=',', type=<class 'str'>)
              Convert a string like a,b,c into a JSON array ["a", "b", "c"]

              The delimiter parameter can be used to specify a different delimiter.

              The  type  parameter  can  be  set to float or int to produce a JSON array of different types, for
              example if the column's string value was 1.2,3,4.5 the following:

                 r.jsonsplit(value, type=float)

              Would produce an array like this: [1.2, 3.0, 4.5]

       r.parsedate(value, dayfirst=False, yearfirst=False, errors=None)
              Parse a date and convert it to ISO date format: yyyy-mm-dd

              In the case of ambiguous dates such as 03/04/05, U.S. MM/DD/YY format is assumed - you can use
              dayfirst=True or yearfirst=True to change how these dates are interpreted.

              Use the errors= parameter to specify what should happen if a value cannot be parsed.

              By default, if any value cannot be parsed an error will be raised and all values will be left
              as they were.

              Set errors=r.IGNORE to ignore any values that cannot be parsed, leaving them unchanged.

              Set errors=r.SET_NULL to set any values that cannot be parsed to null.

       r.parsedatetime(value, dayfirst=False, yearfirst=False, errors=None)
              Parse a datetime and convert it to ISO datetime format: yyyy-mm-ddTHH:MM:SS

       These recipes can be used in the code passed to sqlite-utils convert like this:

          sqlite-utils convert my.db mytable mycolumn \
            'r.jsonsplit(value)'

       To use any of the documented parameters, do this:

          sqlite-utils convert my.db mytable mycolumn \
            'r.jsonsplit(value, delimiter=":")'
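
       The recipes are also importable from the sqlite_utils.recipes module, which can be a convenient way
       to experiment with what they return - a quick sketch, assuming that module path:

          from sqlite_utils import recipes

          print(recipes.jsonsplit("a,b,c"))
          # '["a", "b", "c"]'
          print(recipes.jsonsplit("1.2,3,4.5", type=float))
          # '[1.2, 3.0, 4.5]'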

   Saving the result to a different column
       The  --output  and  --output-type  options can be used to save the result of the conversion to a separate
       column, which will be created if that column does not already exist:

          sqlite-utils convert content.db articles headline 'value.upper()' \
            --output headline_upper

       The type of the created column defaults to text, but a different  column  type  can  be  specified  using
       --output-type.  This  example will create a new floating point column called id_as_a_float with a copy of
       each item's ID increased by 0.5:

          sqlite-utils convert content.db articles id 'float(value) + 0.5' \
            --output id_as_a_float \
            --output-type float

       You can drop the original column at the end of the operation by adding --drop.

   Converting a column into multiple columns
       Sometimes you may wish to convert a single column into multiple derived columns.  For  example,  you  may
       have  a  location  column  containing latitude,longitude values which you wish to split out into separate
       latitude and longitude columns.

       You can achieve this using the --multi option to sqlite-utils convert. This option  expects  your  Python
       code to return a Python dictionary: new columns will be created and populated for each of the keys in
       that dictionary.

       For the latitude,longitude example you would use the following:

          sqlite-utils convert demo.db places location \
          'bits = value.split(",")
          return {
            "latitude": float(bits[0]),
            "longitude": float(bits[1]),
          }' --multi

       The type of the returned values will be taken into  account  when  creating  the  new  columns.  In  this
       example, the resulting database schema will look like this:

          CREATE TABLE [places] (
              [location] TEXT,
              [latitude] FLOAT,
              [longitude] FLOAT
          );

       The conversion function can also return None, in which case its output will be ignored. You can drop
       the original column at the end of the operation by adding --drop.
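
       The same conversion is available in Python via table.convert() with multi=True - a minimal sketch:

          from sqlite_utils import Database

          db = Database("demo.db")

          def convert(value):
              bits = value.split(",")
              return {
                  "latitude": float(bits[0]),
                  "longitude": float(bits[1]),
              }

          # multi=True creates and populates a new column for each key in the
          # returned dictionary; drop=True removes the original location column
          db["places"].convert("location", convert, multi=True, drop=True)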

   Creating tables
       Most of the time creating tables by inserting example data is the  quickest  approach.  If  you  need  to
       create an empty table in advance of inserting data you can do so using the create-table command:

          sqlite-utils create-table mydb.db mytable id integer name text --pk=id

       This  will  create a table called mytable with two columns - an integer id column and a text name column.
       It will set the id column to be the primary key.

       You can pass as many column-name column-type pairs as you like. Valid types are integer, text, float  and
       blob.

       You  can specify columns that should be NOT NULL using --not-null colname. You can specify default values
       for columns using --default colname defaultvalue.

          sqlite-utils create-table mydb.db mytable \
              id integer \
              name text \
              age integer \
              is_good integer \
              --not-null name \
              --not-null age \
              --default is_good 1 \
              --pk=id

          sqlite-utils tables mydb.db --schema -t

          table    schema
          -------  --------------------------------
          mytable  CREATE TABLE [mytable] (
                      [id] INTEGER PRIMARY KEY,
                      [name] TEXT NOT NULL,
                      [age] INTEGER NOT NULL,
                      [is_good] INTEGER DEFAULT '1'
                  )

       You can specify foreign key relationships  between  the  tables  you  are  creating  using  --fk  colname
       othertable othercolumn:

          sqlite-utils create-table books.db authors \
              id integer \
              name text \
              --pk=id

          sqlite-utils create-table books.db books \
              id integer \
              title text \
              author_id integer \
              --pk=id \
              --fk author_id authors id

          sqlite-utils tables books.db --schema -t

          table    schema
          -------  -------------------------------------------------
          authors  CREATE TABLE [authors] (
                      [id] INTEGER PRIMARY KEY,
                      [name] TEXT
                   )
          books    CREATE TABLE [books] (
                      [id] INTEGER PRIMARY KEY,
                      [title] TEXT,
                      [author_id] INTEGER REFERENCES [authors]([id])
                   )

       If  a  table  with the same name already exists, you will get an error. You can choose to silently ignore
       this error with --ignore, or you can replace the existing table with a new, empty table using --replace.

       You can also pass --transform to transform the existing table to match the  new  schema.  See  Explicitly
       creating a table in the Python library documentation for details of how this option works.

   Renaming a table
       You can rename a table using the rename-table command:

          sqlite-utils rename-table mydb.db oldname newname

       Pass  --ignore  to  ignore  any errors caused by the table not existing, or the new name already being in
       use.

   Duplicating tables
       The duplicate command duplicates a table - creating a new table with the same schema and a copy of all of
       the rows:

          sqlite-utils duplicate books.db authors authors_copy

   Dropping tables
       You can drop a table using the drop-table command:

          sqlite-utils drop-table mydb.db mytable

       Use --ignore to ignore the error if the table does not exist.

   Transforming tables
       The transform command allows you to apply complex transformations to a table that cannot  be  implemented
       using a regular SQLite ALTER TABLE command. See Transforming a table for details of how this works.

          sqlite-utils transform mydb.db mytable \
              --drop column1 \
              --rename column2 column_renamed

       Every option for this command (with the exception of --pk-none) can be specified multiple times. The
       options are as follows:

       --type column-name new-type
              Change the type of the specified column. Valid types are integer, text, float, blob.

       --drop column-name
              Drop the specified column.

       --rename column-name new-name
              Rename this column to a new name.

       --column-order column
              Use this multiple times to specify a new order for your columns. -o shortcut is also available.

       --not-null column-name
              Set this column as NOT NULL.

       --not-null-false column-name
              For a column that is currently set as NOT NULL, remove the NOT NULL.

       --pk column-name
              Change the primary key column for this table. Pass --pk multiple times if you  want  to  create  a
              compound primary key.

       --pk-none
              Remove the primary key from this table, turning it into a rowid table.

       --default column-name value
              Set the default value of this column.

       --default-none column
              Remove the default value for this column.

       --drop-foreign-key column
              Drop the specified foreign key.

       --add-foreign-key column other_table other_column
              Add a foreign key constraint to column pointing to other_table.other_column.

       If  you  want  to see the SQL that will be executed to make the change without actually executing it, add
       the --sql flag. For example:

          sqlite-utils transform fixtures.db roadside_attractions \
              --rename pk id \
              --default name Untitled \
              --column-order id \
              --column-order longitude \
              --column-order latitude \
              --drop address \
              --sql

          CREATE TABLE [roadside_attractions_new_4033a60276b9] (
             [id] INTEGER PRIMARY KEY,
             [longitude] FLOAT,
             [latitude] FLOAT,
             [name] TEXT DEFAULT 'Untitled'
          );
          INSERT INTO [roadside_attractions_new_4033a60276b9] ([longitude], [latitude], [id], [name])
             SELECT [longitude], [latitude], [pk], [name] FROM [roadside_attractions];
          DROP TABLE [roadside_attractions];
          ALTER TABLE [roadside_attractions_new_4033a60276b9] RENAME TO [roadside_attractions];
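
       If you are working in Python, a similar preview is available from the table.transform_sql() method,
       which returns the SQL statements without executing them. A sketch, assuming the keyword arguments
       mirror the command-line options shown above:

          from sqlite_utils import Database

          db = Database("fixtures.db")
          # transform_sql() returns a list of SQL statements, one per step
          for sql in db["roadside_attractions"].transform_sql(
              rename={"pk": "id"},
              defaults={"name": "Untitled"},
              column_order=["id", "longitude", "latitude"],
              drop={"address"},
          ):
              print(sql)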

   Adding a primary key to a rowid table
       SQLite tables that are created without an explicit primary key are created as rowid  tables.  They  still
       have a numeric primary key which is available in the rowid column, but that column is not included in the
       output of select *. Here's an example:

          echo '[{"name": "Azi"}, {"name": "Suna"}]' | \
              sqlite-utils insert chickens.db chickens -
          sqlite-utils schema chickens.db

          CREATE TABLE [chickens] (
             [name] TEXT
          );

          sqlite-utils chickens.db 'select * from chickens'

          [{"name": "Azi"},
           {"name": "Suna"}]

          sqlite-utils chickens.db 'select rowid, * from chickens'

          [{"rowid": 1, "name": "Azi"},
           {"rowid": 2, "name": "Suna"}]

       You  can  use  sqlite-utils transform ... --pk id to add a primary key column called id to the table. The
       primary key will be created as an INTEGER PRIMARY KEY and the existing rowid values will be copied across
       to it. It will automatically increment as new rows are added to the table:

          sqlite-utils transform chickens.db chickens --pk id

          sqlite-utils schema chickens.db

          CREATE TABLE "chickens" (
             [id] INTEGER PRIMARY KEY,
             [name] TEXT
          );

          sqlite-utils chickens.db 'select * from chickens'

          [{"id": 1, "name": "Azi"},
           {"id": 2, "name": "Suna"}]

          echo '{"name": "Cardi"}' | sqlite-utils insert chickens.db chickens -

          sqlite-utils chickens.db 'select * from chickens'

          [{"id": 1, "name": "Azi"},
           {"id": 2, "name": "Suna"},
           {"id": 3, "name": "Cardi"}]

   Extracting columns into a separate table
       The sqlite-utils extract command can be used to extract specified columns into a separate table.

       Take a look at the Python API documentation for Extracting columns into a separate table for  a  detailed
       description of how this works, including examples of table schemas before and after running an extraction
       operation.

       The  command  takes  a  database,  table and one or more columns that should be extracted. To extract the
       species column from the trees table you would run:

          sqlite-utils extract my.db trees species

       This would produce the following schema:

          CREATE TABLE "trees" (
              [id] INTEGER PRIMARY KEY,
              [TreeAddress] TEXT,
              [species_id] INTEGER,
              FOREIGN KEY(species_id) REFERENCES species(id)
          );
          CREATE TABLE [species] (
              [id] INTEGER PRIMARY KEY,
              [species] TEXT
          );
          CREATE UNIQUE INDEX [idx_species_species]
              ON [species] ([species]);

       The command takes the following options:

       --table TEXT
              The name of the lookup table to extract the columns to. This defaults to using the name of the
              columns that are being extracted.

       --fk-column TEXT
              The name of the foreign key column to add to the table. Defaults to columnname_id.

       --rename <TEXT TEXT>
              Use this option to rename the columns created in the new lookup table.

       --silent
              Don't display the progress bar.

       Here's  a  more complex example that makes use of these options. It converts this CSV file full of global
       power plants into SQLite, then extracts the country and country_long columns into  a  separate  countries
       table:

          wget 'https://github.com/wri/global-power-plant-database/blob/232a6666/output_database/global_power_plant_database.csv?raw=true'
          sqlite-utils insert global.db power_plants \
              'global_power_plant_database.csv?raw=true' --csv
          # Extract those columns:
          sqlite-utils extract global.db power_plants country country_long \
              --table countries \
              --fk-column country_id \
              --rename country_long name

       After running the above, the command sqlite-utils schema global.db reveals the following schema:

          CREATE TABLE [countries] (
             [id] INTEGER PRIMARY KEY,
             [country] TEXT,
             [name] TEXT
          );
          CREATE TABLE "power_plants" (
             [country_id] INTEGER,
             [name] TEXT,
             [gppd_idnr] TEXT,
             [capacity_mw] TEXT,
             [latitude] TEXT,
             [longitude] TEXT,
             [primary_fuel] TEXT,
             [other_fuel1] TEXT,
             [other_fuel2] TEXT,
             [other_fuel3] TEXT,
             [commissioning_year] TEXT,
             [owner] TEXT,
             [source] TEXT,
             [url] TEXT,
             [geolocation_source] TEXT,
             [wepp_id] TEXT,
             [year_of_capacity_data] TEXT,
             [generation_gwh_2013] TEXT,
             [generation_gwh_2014] TEXT,
             [generation_gwh_2015] TEXT,
             [generation_gwh_2016] TEXT,
             [generation_gwh_2017] TEXT,
             [generation_data_source] TEXT,
             [estimated_generation_gwh] TEXT,
             FOREIGN KEY([country_id]) REFERENCES [countries]([id])
          );
          CREATE UNIQUE INDEX [idx_countries_country_name]
              ON [countries] ([country], [name]);
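
       The same extraction can be performed from Python using the table.extract() method referenced above -
       a sketch:

          from sqlite_utils import Database

          db = Database("global.db")
          db["power_plants"].extract(
              ["country", "country_long"],
              table="countries",
              fk_column="country_id",
              rename={"country_long": "name"},
          )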

   Creating views
       You can create a view using the create-view command:

          sqlite-utils create-view mydb.db version "select sqlite_version()"

          sqlite-utils mydb.db "select * from version"

          [{"sqlite_version()": "3.31.1"}]

       Use  --replace to replace an existing view of the same name, and --ignore to do nothing if a view already
       exists.
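
       From Python the equivalent is the db.create_view() method - a minimal sketch, assuming replace= and
       ignore= keyword arguments matching the options above:

          from sqlite_utils import Database

          db = Database("mydb.db")
          db.create_view("version", "select sqlite_version()", replace=True)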

   Dropping views
       You can drop a view using the drop-view command:

          sqlite-utils drop-view mydb.db myview

       Use --ignore to ignore the error if the view does not exist.

   Adding columns
       You can add a column using the add-column command:

          sqlite-utils add-column mydb.db mytable nameofcolumn text

       The last argument here is the type of the column to be created. You can use one of text,  integer,  float
       or blob. If you leave it off, text will be used.

       You can add a column that is a foreign key reference to another table using the --fk option:

          sqlite-utils add-column mydb.db dogs species_id --fk species

       This  will  automatically  detect  the name of the primary key on the species table and use that (and its
       type) for the new column.

       You can explicitly specify the column you wish to reference using --fk-col:

          sqlite-utils add-column mydb.db dogs species_id --fk species --fk-col ref

       You can set a NOT NULL DEFAULT 'x' constraint on the new column using --not-null-default:

          sqlite-utils add-column mydb.db dogs friends_count integer --not-null-default 0
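
       These commands correspond to the table.add_column() method in the Python library - a sketch:

          from sqlite_utils import Database

          db = Database("mydb.db")
          # Plain column with an explicit type
          db["mytable"].add_column("nameofcolumn", str)
          # Foreign key column - the primary key of species is detected automatically
          db["dogs"].add_column("species_id", fk="species")
          # NOT NULL DEFAULT 0
          db["dogs"].add_column("friends_count", int, not_null_default=0)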

   Adding columns automatically on insert/update
       You can use the --alter option to automatically add  new  columns  if  the  data  you  are  inserting  or
       upserting is of a different shape:

          sqlite-utils insert dogs.db dogs new-dogs.json --pk=id --alter

   Adding foreign key constraints
       The  add-foreign-key  command  can  be  used  to  add  new  foreign key references to an existing table -
       something which SQLite's ALTER TABLE command does not support.

       To add a foreign key constraint pointing the books.author_id column to authors.id in  another  table,  do
       this:

          sqlite-utils add-foreign-key books.db books author_id authors id

       If you omit the other table and other column references, sqlite-utils will attempt to guess them - so
       the above example could instead look like this:

          sqlite-utils add-foreign-key books.db books author_id

       Add --ignore to ignore an existing foreign key (as opposed to returning an error):

          sqlite-utils add-foreign-key books.db books author_id --ignore

       See Adding foreign key constraints in the Python API documentation for further details, including how the
       automatic table guessing mechanism works.

   Adding multiple foreign keys at once
       Adding a foreign key requires a VACUUM. On large databases this can be an expensive operation, so if  you
       are  adding  multiple  foreign  keys you can combine them into one operation (and hence one VACUUM) using
       add-foreign-keys:

          sqlite-utils add-foreign-keys books.db \
              books author_id authors id \
              authors country_id countries id

       When you are using this command each foreign key needs to be defined in full, as  four  arguments  -  the
       table, column, other table and other column.
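
       In Python the batched version is db.add_foreign_keys(), which takes a list of those four-item tuples
       - a sketch:

          from sqlite_utils import Database

          db = Database("books.db")
          db.add_foreign_keys([
              ("books", "author_id", "authors", "id"),
              ("authors", "country_id", "countries", "id"),
          ])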

   Adding indexes for all foreign keys
       If  you  want to ensure that every foreign key column in your database has a corresponding index, you can
       do so like this:

          sqlite-utils index-foreign-keys books.db

   Setting defaults and not null constraints
       You can use the --not-null and --default options (to both insert and  upsert)  to  specify  columns  that
       should be NOT NULL or to set database defaults for one or more specific columns:

          sqlite-utils insert dogs.db dogs_with_scores dogs-with-scores.json \
              --not-null=age \
              --not-null=name \
              --default age 2 \
              --default score 5

   Creating indexes
       You can add an index to an existing table using the create-index command:

          sqlite-utils create-index mydb.db mytable col1 [col2...]

       This can be used to create indexes against a single column or multiple columns.

       The  name  of  the index will be automatically derived from the table and columns. To specify a different
       name, use --name=name_of_index.

       Use the --unique option to create a unique index.

       Use --if-not-exists to avoid attempting to create the index if one with that name already exists.

       To add an index on a column in descending order, prefix the column with a hyphen. Since this could be
       mistaken for a command-line option, use -- to mark the end of the options, like this:

          sqlite-utils create-index mydb.db mytable -- col1 -col2 col3

       This will create an index on that table on (col1, col2 desc, col3).

       If  your  column  names are already prefixed with a hyphen you'll need to manually execute a CREATE INDEX
       SQL statement to add indexes to them rather than using this tool.

       Add the --analyze option to run ANALYZE against the index after it has been created.
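
       The Python equivalent is table.create_index(). Descending columns are expressed using the DescIndex
       wrapper - a sketch, assuming the analyze= keyword mirrors the --analyze option:

          from sqlite_utils import Database
          from sqlite_utils.db import DescIndex

          db = Database("mydb.db")
          # Equivalent to: sqlite-utils create-index mydb.db mytable -- col1 -col2 col3
          db["mytable"].create_index(
              ["col1", DescIndex("col2"), "col3"],
              if_not_exists=True,
              analyze=True,
          )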

   Configuring full-text search
       You can enable SQLite full-text search on a table and a set of columns like this:

          sqlite-utils enable-fts mydb.db documents title summary

       This will use SQLite's FTS5 module by default. Use --fts4 if you want to use FTS4:

          sqlite-utils enable-fts mydb.db documents title summary --fts4

       The enable-fts command will populate the new index with all existing documents. If  you  later  add  more
       documents you will need to use populate-fts to cause them to be indexed as well:

          sqlite-utils populate-fts mydb.db documents title summary

       A  better  solution  here  is to use database triggers. You can set up database triggers to automatically
       update the full-text index using the --create-triggers option when you first run enable-fts:

          sqlite-utils enable-fts mydb.db documents title summary --create-triggers

       To set a custom FTS tokenizer, e.g. to enable Porter stemming, use --tokenize=:

          sqlite-utils populate-fts mydb.db documents title summary --tokenize=porter

       To remove the FTS tables and triggers you created, use disable-fts:

          sqlite-utils disable-fts mydb.db documents

       To rebuild one or more FTS tables (see Rebuilding a full-text search table), use rebuild-fts:

          sqlite-utils rebuild-fts mydb.db documents

       You can rebuild every FTS table by running rebuild-fts without passing any table names:

          sqlite-utils rebuild-fts mydb.db
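
       These commands map onto methods on the table object in the Python library - a sketch of the same
       sequence, hedged on the exact keyword names:

          from sqlite_utils import Database

          db = Database("mydb.db")
          documents = db["documents"]
          # FTS5 index with triggers to keep it updated, using Porter stemming
          documents.enable_fts(
              ["title", "summary"],
              create_triggers=True,
              tokenize="porter",
          )
          documents.populate_fts(["title", "summary"])  # index rows added later
          documents.rebuild_fts()                       # rebuild from scratch
          documents.disable_fts()                       # remove tables and triggers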

   Executing searches
       Once you have configured full-text search for a table, you can search it using sqlite-utils search:

          sqlite-utils search mydb.db documents searchterm

       This command accepts the same output options as sqlite-utils query: --table, --csv, --tsv, --nl etc.

       By default it shows the most relevant matches first. You can specify a different sort order using the  -o
       option, which can take a column or a column followed by desc:

          # Sort by rowid
          sqlite-utils search mydb.db documents searchterm -o rowid
          # Sort by created in descending order
          sqlite-utils search mydb.db documents searchterm -o 'created desc'

       SQLite advanced search syntax is enabled by default. To run a search with automatic quoting applied
       to the terms - avoiding them being interpreted as advanced search syntax - use the --quote option.

       You can specify a subset of columns to be returned using the -c option one or more times:

          sqlite-utils search mydb.db documents searchterm -c title -c created

       By default all search results will be returned. You can use --limit  20  to  return  just  the  first  20
       results.

       Use the --sql option to output the SQL that would be executed, rather than running the query:

          sqlite-utils search mydb.db documents searchterm --sql

          with original as (
              select
                  rowid,
                  *
              from [documents]
          )
          select
              [original].*
          from
              [original]
              join [documents_fts] on [original].rowid = [documents_fts].rowid
          where
              [documents_fts] match :query
          order by
              [documents_fts].rank
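
       The same search mechanism is exposed in Python as table.search() (and table.search_sql() for the SQL)
       - a sketch, hedged on the exact keyword names:

          from sqlite_utils import Database

          db = Database("mydb.db")
          for row in db["documents"].search(
              "searchterm",
              columns=["title", "created"],
              limit=20,
          ):
              print(row)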

   Enabling cached counts
       select  count(*)  queries  can  take a long time against large tables. sqlite-utils can speed these up by
       adding triggers to maintain a _counts table, see Cached table counts using triggers for details.

       The sqlite-utils enable-counts command can be used to configure these triggers, either for every table in
       the database or for specific tables.

          # Configure triggers for every table in the database
          sqlite-utils enable-counts mydb.db

          # Configure triggers just for specific tables
          sqlite-utils enable-counts mydb.db table1 table2

       If the _counts table ever becomes out-of-sync with the actual table counts you can repair  it  using  the
       reset-counts command:

          sqlite-utils reset-counts mydb.db
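
       The same machinery is available from Python - a sketch using the Database methods described in Cached
       table counts using triggers:

          from sqlite_utils import Database

          db = Database("mydb.db")
          db.enable_counts()         # add triggers for every table
          print(db.cached_counts())  # read counts back from the _counts table
          db.reset_counts()          # repair _counts if it gets out-of-sync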

   Optimizing index usage with ANALYZE
       The  SQLite  ANALYZE  command builds a table of statistics which the query planner can use to make better
       decisions about which indexes to use for a given query.

       You should run ANALYZE if your database is large and you do not think your indexes are being  efficiently
       used.

       To run ANALYZE against every index in a database, use this:

          sqlite-utils analyze mydb.db

       You  can  run  it against specific tables, or against specific named indexes, by passing them as optional
       arguments:

          sqlite-utils analyze mydb.db mytable idx_mytable_name

       You can also run ANALYZE as part of another command using the --analyze option. This is supported by  the
       create-index, insert and upsert commands.

   Vacuum
       You can run VACUUM to optimize your database like so:

          sqlite-utils vacuum mydb.db

   Optimize
       The  optimize command can dramatically reduce the size of your database if you are using SQLite full-text
       search. It runs OPTIMIZE against all of your FTS4 and FTS5 tables, then runs VACUUM.

       If you just want to run OPTIMIZE without the VACUUM, use the --no-vacuum flag.

          # Optimize all FTS tables and then VACUUM
          sqlite-utils optimize mydb.db

          # Optimize but skip the VACUUM
          sqlite-utils optimize --no-vacuum mydb.db

       To optimize specific tables rather than every FTS table, pass those tables as extra arguments:

          sqlite-utils optimize mydb.db table_1 table_2

   WAL mode
       You can enable Write-Ahead Logging for a database file using the enable-wal command:

          sqlite-utils enable-wal mydb.db

       You can disable WAL mode using disable-wal:

          sqlite-utils disable-wal mydb.db

       Both of these commands accept one or more database files as arguments.
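
       From Python, the Database methods of the same names can be used - a minimal sketch:

          from sqlite_utils import Database

          db = Database("mydb.db")
          db.enable_wal()
          print(db.journal_mode)  # "wal"
          db.disable_wal()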

   Dumping the database to SQL
       The dump command outputs a SQL dump of the schema and full contents of the specified database file:

          sqlite-utils dump mydb.db
          BEGIN TRANSACTION;
          CREATE TABLE ...
          ...
          COMMIT;

   Loading SQLite extensions
       Many  of  these  commands  have  the  ability  to   load   additional   SQLite   extensions   using   the
       --load-extension=/path/to/extension  option  -  use  --help  to check for support, e.g. sqlite-utils rows
       --help.

       This option can be applied multiple times to load multiple extensions.

       Since SpatiaLite is commonly used with SQLite, the value  spatialite  is  special:  it  will  search  for
       SpatiaLite  in  the most common installation locations, saving you from needing to remember exactly where
       that module is located:

          sqlite-utils memory "select spatialite_version()" --load-extension=spatialite

          [{"spatialite_version()": "4.3.0a"}]

   SpatiaLite helpers
       SpatiaLite adds geographic capability to SQLite (similar  to  how  PostGIS  builds  on  PostgreSQL).  The
       SpatiaLite cookbook is a good resource for learning what's possible with it.

       To convert an existing table to a geographic table by adding a geometry column, use the sqlite-utils
       add-geometry-column command:

          sqlite-utils add-geometry-column spatial.db locations geometry --type POLYGON --srid 4326

       The  table  (locations  in  the  example  above)  must already exist before adding a geometry column. Use
       sqlite-utils create-table first, then add-geometry-column.

       Use the --type option to specify  a  geometry  type.  By  default,  add-geometry-column  uses  a  generic
       GEOMETRY, which will work with any type, though it may not be supported by some desktop GIS applications.

       Eight (case-insensitive) types are allowed:

       • POINT

       • LINESTRING

       • POLYGON

       • MULTIPOINT

       • MULTILINESTRING

       • MULTIPOLYGON

       • GEOMETRYCOLLECTION

       • GEOMETRY

   Adding spatial indexes
       Once you have a geometry column, you can speed up bounding box queries by adding a spatial index:

          sqlite-utils create-spatial-index spatial.db locations geometry

       See this SpatiaLite Cookbook recipe for examples of how to use a spatial index.

   Installing packages
       The convert command and the insert --convert and query --functions options can be provided with a
       Python script that imports additional modules from the sqlite-utils environment.

       You can install packages from PyPI directly into  the  correct  environment  using  sqlite-utils  install
       <package>. This is a wrapper around pip install.

          sqlite-utils install beautifulsoup4

       Use -U to upgrade an existing package.

   Uninstalling packages
       You  can  uninstall  packages  that were installed using sqlite-utils install with sqlite-utils uninstall
       <package>:

          sqlite-utils uninstall beautifulsoup4

       Use -y to skip the request for confirmation.

   Experimental TUI
       A TUI is a "text user interface" (or "terminal user interface") - a keyboard and mouse  driven  graphical
       interface running in your terminal.

       sqlite-utils  has  experimental  support for a TUI for building command-line invocations, built on top of
       the Trogon TUI library.

       To enable this feature you will need to install the trogon dependency. You can do that like so:

          sqlite-utils install trogon

       Once installed, running the sqlite-utils tui command will launch the TUI interface:

          sqlite-utils tui

       You can then construct a command by selecting options from the menus, and execute it using Ctrl+R.

       [image: A TUI interface for sqlite-utils - the left column shows a list of commands, while the right
       panel has a form for constructing arguments to the add-column command.]

   sqlite_utils Python library

   Getting started
       Here's how to create a new SQLite database file containing a new  chickens  table,  populated  with  four
       records:

          from sqlite_utils import Database

          db = Database("chickens.db")
          db["chickens"].insert_all([{
              "name": "Azi",
              "color": "blue",
          }, {
              "name": "Lila",
              "color": "blue",
          }, {
              "name": "Suna",
              "color": "gold",
          }, {
              "name": "Cardi",
              "color": "black",
          }])

       You can loop through those rows like this:

          for row in db["chickens"].rows:
              print(row)

       Which outputs the following:

          {'name': 'Azi', 'color': 'blue'}
          {'name': 'Lila', 'color': 'blue'}
          {'name': 'Suna', 'color': 'gold'}
          {'name': 'Cardi', 'color': 'black'}

       To run a SQL query, use db.query():

          for row in db.query("""
              select color, count(*)
              from chickens group by color
              order by count(*) desc
          """):
              print(row)

       Which outputs:

          {'color': 'blue', 'count(*)': 2}
          {'color': 'gold', 'count(*)': 1}
          {'color': 'black', 'count(*)': 1}

   Connecting to or creating a database
       Database  objects  are  constructed  by passing in either a path to a file on disk or an existing SQLite3
       database connection:

          from sqlite_utils import Database

          db = Database("my_database.db")

       This will create my_database.db if it does not already exist.

       If you want to recreate a database from scratch (first removing the existing file from disk if it already
       exists) you can use the recreate=True argument:

          db = Database("my_database.db", recreate=True)

       Instead of a file path you can pass in an existing SQLite connection:

          import sqlite3

          db = Database(sqlite3.connect("my_database.db"))

       If you want to create an in-memory database, you can do so like this:

          db = Database(memory=True)

       You can also create a named in-memory database. Unlike regular memory databases, these can be
       accessed by multiple threads, provided at least one reference to the database still exists. Using
       del db will clear the database from memory.

          db = Database(memory_name="my_shared_database")

       Connections use PRAGMA recursive_triggers=on by default. If you don't want to use recursive triggers  you
       can turn them off using:

          db = Database(memory=True, recursive_triggers=False)

       By  default,  any  sqlite-utils plugins that implement the prepare_connection(conn) hook will be executed
       against the connection when you create the Database object. You can opt out of  executing  plugins  using
       execute_plugins=False like this:

          db = Database(memory=True, execute_plugins=False)

   Attaching additional databases
       SQLite  supports  cross-database  SQL  queries, which can join data from tables in more than one database
       file.

       You can attach an additional database using the .attach() method, providing an  alias  to  use  for  that
       database and the path to the SQLite file on disk.

          db = Database("first.db")
          db.attach("second", "second.db")
          # Now you can run queries like this one:
          print(db.query("""
          select * from table_in_first
              union all
          select * from second.table_in_second
          """))

       You  can  reference  tables in the attached database using the alias value you passed to db.attach(alias,
       filepath) as a prefix, for example the second.table_in_second reference in the SQL query above.

   Tracing queries
       You can use the tracer mechanism to see SQL queries that are being executed by  SQLite.  A  tracer  is  a
       function  that you provide which will be called with sql and params arguments every time SQL is executed,
       for example:

          def tracer(sql, params):
              print("SQL: {} - params: {}".format(sql, params))

       You can pass this function to the Database() constructor like so:

          db = Database(memory=True, tracer=tracer)

       You can also turn on a tracer function temporarily for a block of  code  using  the  with  db.tracer(...)
       context manager:

          db = Database(memory=True)
          # ... later
          with db.tracer(print):
              db["dogs"].insert({"name": "Cleo"})

       This example will print queries only for the duration of the with block.

   Executing queries
       The Database class offers several methods for directly executing SQL queries.

   db.query(sql, params)
       The  db.query(sql)  function  executes  a  SQL  query  and  returns  an iterator over Python dictionaries
       representing the resulting rows:

          db = Database(memory=True)
          db["dogs"].insert_all([{"name": "Cleo"}, {"name": "Pancakes"}])
          for row in db.query("select * from dogs"):
              print(row)
          # Outputs:
          # {'name': 'Cleo'}
          # {'name': 'Pancakes'}

   db.execute(sql, params)
       The db.execute() and db.executescript() methods provide wrappers around .execute()  and  .executescript()
       on  the  underlying  SQLite  connection.  These  wrappers  log  to  the  tracer  function if one has been
       registered.

       db.execute(sql) returns a sqlite3.Cursor that was used to execute the SQL.

          db = Database(memory=True)
          db["dogs"].insert({"name": "Cleo"})
          cursor = db.execute("update dogs set name = 'Cleopaws'")
          print(cursor.rowcount)
          # Outputs the number of rows affected by the update
          # In this case 1

       Other cursor methods such as .fetchone() and .fetchall() are also available,  see  the  standard  library
       documentation.

   Passing parameters
       Both  db.query()  and  db.execute() accept an optional second argument for parameters to be passed to the
       SQL query.

       This can take the form of either a tuple/list or a dictionary, depending on the type of  parameters  used
       in the query. Values passed in this way will be correctly quoted and escaped, helping avoid SQL injection
       vulnerabilities.

       ? parameters in the SQL query can be filled in using a list:

          db.execute("update dogs set name = ?", ["Cleopaws"])
          # This will rename ALL dogs to be called "Cleopaws"

       Named parameters using :name can be filled using a dictionary:

          dog = next(db.query(
              "select rowid, name from dogs where name = :name",
              {"name": "Cleopaws"}
          ))
          # dog is now {'rowid': 1, 'name': 'Cleopaws'}

       In  this  example  next() is used to retrieve the first result in the iterator returned by the db.query()
       method.

   Accessing tables
       Tables are accessed using the indexing operator, like so:

          table = db["my_table"]

       If the table does not yet exist, it will be created the first time you attempt to insert or  upsert  data
       into it.

       You can also access tables using the .table() method like so:

          table = db.table("my_table")

       Using this factory function allows you to set Table configuration options.

   Listing tables
       You can list the names of tables in a database using the .table_names() method:

          >>> db.table_names()
          ['dogs']

       To see just the FTS4 tables, use .table_names(fts4=True). For FTS5, use .table_names(fts5=True).

       You can also iterate through the table objects themselves using the .tables property:

          >>> db.tables
          [<Table dogs>]

   Listing views
       .view_names() shows you a list of views in the database:

          >>> db.view_names()
          ['good_dogs']

       You can iterate through view objects using the .views property:

          >>> db.views
          [<View good_dogs>]

       View  objects  are similar to Table objects, except that any attempts to insert or update data will throw
       an error. The full list of methods and properties available on a view object is as follows:

       • columns

       • columns_dict

       • count

       • schema

       • rows

       • rows_where(where, where_args, order_by, select)

       • drop()

   Listing rows
       To iterate through dictionaries for each of the rows in a table, use .rows:

          >>> db = sqlite_utils.Database("dogs.db")
          >>> for row in db["dogs"].rows:
          ...     print(row)
          {'id': 1, 'age': 4, 'name': 'Cleo'}
          {'id': 2, 'age': 2, 'name': 'Pancakes'}

       You can filter rows by a WHERE clause using .rows_where(where, where_args):

          >>> db = sqlite_utils.Database("dogs.db")
          >>> for row in db["dogs"].rows_where("age > ?", [3]):
          ...     print(row)
          {'id': 1, 'age': 4, 'name': 'Cleo'}

       The first argument is a fragment of SQL. The second, optional argument is values to  be  passed  to  that
       fragment  -  you  can  use  ? placeholders and pass an array, or you can use :named parameters and pass a
       dictionary, like this:

          >>> for row in db["dogs"].rows_where("age > :age", {"age": 3}):
          ...     print(row)
          {'id': 1, 'age': 4, 'name': 'Cleo'}

       To return custom columns (instead of the default that uses select *) pass select="column1, column2":

          >>> db = sqlite_utils.Database("dogs.db")
          >>> for row in db["dogs"].rows_where(select='name, age'):
          ...     print(row)
          {'name': 'Cleo', 'age': 4}

       To specify an order, use the order_by= argument:

          >>> for row in db["dogs"].rows_where("age > 1", order_by="age"):
          ...     print(row)
          {'id': 2, 'age': 2, 'name': 'Pancakes'}
          {'id': 1, 'age': 4, 'name': 'Cleo'}

       You can use order_by="age desc" for descending order.

       You can order all records in the table by excluding the where argument:

          >>> for row in db["dogs"].rows_where(order_by="age desc"):
          ...     print(row)
          {'id': 1, 'age': 4, 'name': 'Cleo'}
          {'id': 2, 'age': 2, 'name': 'Pancakes'}

       This method also accepts offset= and limit= arguments, for specifying an OFFSET and a LIMIT for  the  SQL
       query:

          >>> for row in db["dogs"].rows_where(order_by="age desc", limit=1):
          ...     print(row)
          {'id': 1, 'age': 4, 'name': 'Cleo'}

   Counting rows
       To  count  the  number  of  rows  that  would  be  returned  by  a  where filter, use .count_where(where,
       where_args):

          >>> db["dogs"].count_where("age > ?", [1])
          2

   Listing rows with their primary keys
       Sometimes it can be useful to retrieve the primary key along with each row, in order to pass that key (or
       primary key tuple) to the .get() or .update() methods.

       The .pks_and_rows_where() method takes the same signature as .rows_where() (with  the  exception  of  the
       select= parameter) but returns a generator that yields pairs of (primary key, row dictionary).

       The  primary key value will usually be a single value but can also be a tuple if the table has a compound
       primary key.

       If the table is a rowid table (with no explicit primary key column) then that ID will be returned.

          >>> db = sqlite_utils.Database(memory=True)
          >>> db["dogs"].insert({"name": "Cleo"})
          >>> for pk, row in db["dogs"].pks_and_rows_where():
          ...     print(pk, row)
          1 {'rowid': 1, 'name': 'Cleo'}

          >>> db["dogs_with_pk"].insert({"id": 5, "name": "Cleo"}, pk="id")
          >>> for pk, row in db["dogs_with_pk"].pks_and_rows_where():
          ...     print(pk, row)
          5 {'id': 5, 'name': 'Cleo'}

          >>> db["dogs_with_compound_pk"].insert(
          ...     {"species": "dog", "id": 3, "name": "Cleo"},
          ...     pk=("species", "id")
          ... )
          >>> for pk, row in db["dogs_with_compound_pk"].pks_and_rows_where():
          ...     print(pk, row)
          ('dog', 3) {'species': 'dog', 'id': 3, 'name': 'Cleo'}

   Retrieving a specific record
       You can retrieve a record by its primary key using table.get():

          >>> db = sqlite_utils.Database("dogs.db")
          >>> print(db["dogs"].get(1))
          {'id': 1, 'age': 4, 'name': 'Cleo'}

       If the table has a compound primary key you can pass in the primary key values as a tuple:

          >>> db["compound_dogs"].get(("mixed", 3))

       If the record does not exist a NotFoundError will be raised:

          from sqlite_utils.db import NotFoundError

          try:
              row = db["dogs"].get(5)
          except NotFoundError:
              print("Dog not found")

   Showing the schema
       The db.schema property returns the full SQL schema for the database as a string:

          >>> db = sqlite_utils.Database("dogs.db")
          >>> print(db.schema)
          CREATE TABLE "dogs" (
              [id] INTEGER PRIMARY KEY,
              [name] TEXT
          );

   Creating tables
       The easiest way to create a new table is to insert a record into it:

          from sqlite_utils import Database
          import sqlite3

          db = Database("dogs.db")
          dogs = db["dogs"]
          dogs.insert({
              "name": "Cleo",
              "twitter": "cleopaws",
              "age": 3,
              "is_good_dog": True,
          })

       This will automatically create a new table called "dogs" with the following schema:

          CREATE TABLE dogs (
              name TEXT,
              twitter TEXT,
              age INTEGER,
              is_good_dog INTEGER
          )

       You can also specify a primary key by passing the pk= parameter to the .insert() call. This will only  be
       obeyed if the record being inserted causes the table to be created:

          dogs.insert({
              "id": 1,
              "name": "Cleo",
              "twitter": "cleopaws",
              "age": 3,
              "is_good_dog": True,
          }, pk="id")

       After  inserting  a  row like this, the dogs.last_rowid property will return the SQLite rowid assigned to
       the most recently inserted record.

       The dogs.last_pk property will return the last inserted primary key value, if you specified one. This can
       be very useful when writing code that creates foreign keys or many-to-many relationships.
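
       For example, a sketch that inserts a parent row and then uses last_pk to reference it from a child
       row:

          from sqlite_utils import Database

          db = Database(memory=True)
          authors = db["authors"]
          authors.insert({"name": "Sally"}, pk="id")
          # last_pk holds the auto-assigned id of the row just inserted
          db["books"].insert(
              {"title": "Hedgehogs of the world", "author_id": authors.last_pk},
              foreign_keys=[("author_id", "authors")],
          )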

   Custom column order and column types
       The order of the columns in the table will be derived from the order  of  the  keys  in  the  dictionary,
       provided you are using Python 3.6 or later.

       If you want to explicitly set the order of the columns you can do so using the column_order= parameter:

          db["dogs"].insert({
              "id": 1,
              "name": "Cleo",
              "twitter": "cleopaws",
              "age": 3,
              "is_good_dog": True,
          }, pk="id", column_order=("id", "twitter", "name"))

       You don't need to pass all of the columns to the column_order parameter. If you only pass a subset of the
       columns the remaining columns will be ordered based on the key order of the dictionary.

       Column types are detected based on the example data provided. Sometimes you may find you need to
       override these detected types - for example, to create an integer column for data that was provided
       as a string, or to ensure that a table where the first example was None is created as an INTEGER
       rather than a TEXT column. You can do this using the columns= parameter:

          db["dogs"].insert({
              "id": 1,
              "name": "Cleo",
              "age": "5",
          }, pk="id", columns={"age": int, "weight": float})

       This will create a table with the following schema:

          CREATE TABLE [dogs] (
              [id] INTEGER PRIMARY KEY,
              [name] TEXT,
              [age] INTEGER,
              [weight] FLOAT
          )

   Explicitly creating a table
       You can directly create a new table without inserting any data into it using the .create() method:

          db["cats"].create({
              "id": int,
              "name": str,
              "weight": float,
          }, pk="id")

       The first argument here is a dictionary specifying the columns you would like to create. Each  column  is
       paired with a Python type indicating the type of column. See Adding columns for full details on how these
       types work.

       This   method   takes   optional   arguments   pk=,   column_order=,  foreign_keys=,  not_null=set()  and
       defaults=dict() - explained below.

       A sqlite_utils.utils.sqlite3.OperationalError will be raised if a table of that name already exists.

       You can pass ignore=True to ignore that error. You can also use if_not_exists=True to use the SQL  CREATE
       TABLE IF NOT EXISTS pattern to achieve the same effect:

          db["cats"].create({
              "id": int,
              "name": str,
          }, pk="id", if_not_exists=True)

       To  drop  and  replace  any existing table of that name, pass replace=True. This is a dangerous operation
       that will result in loss of existing data in the table.

       You can also pass transform=True to have  any  existing  tables  transformed  to  match  your  new  table
       specification.  This  is  a dangerous operation as it will drop columns that are no longer listed in your
       call to .create(), so be careful when running this.

          db["cats"].create({
              "id": int,
              "name": str,
              "weight": float,
          }, pk="id", transform=True)

       The transform=True option will update the table schema if any of the following have changed:

       • The specified columns or their types

       • The specified primary key

       • The order of the columns, defined using column_order=

       • The not_null= or defaults= arguments

       Changes to foreign_keys= are not currently detected and applied by transform=True.

   Compound primary keys
       If you want to create a table with a compound primary key that spans multiple columns, you can do  so  by
       passing a tuple of column names to any of the methods that accept a pk= parameter. For example:

          db["cats"].create({
              "id": int,
              "breed": str,
              "name": str,
              "weight": float,
          }, pk=("breed", "id"))

       This also works for the .insert(), .insert_all(), .upsert() and .upsert_all() methods.

   Specifying foreign keys
       Any  operation that can create a table (.create(), .insert(), .insert_all(), .upsert() and .upsert_all())
       accepts an optional foreign_keys= argument which can be used to set up foreign key  constraints  for  the
       table that is being created.

       If  you  are  using your database with Datasette, Datasette will detect these constraints and use them to
       generate hyperlinks to associated records.

       The foreign_keys argument takes a list that indicates which foreign keys should be created. The list  can
       take several forms. The simplest is a list of columns:

          foreign_keys=["author_id"]

       The  library  will  guess  which  tables  you wish to reference based on the column names using the rules
       described in Adding foreign key constraints.

       You can also be more explicit, by passing in a list of tuples:

          foreign_keys=[
              ("author_id", "authors", "id")
          ]

       This means that the author_id column should be a foreign key that references the id column in the authors
       table.

       You can leave off the third item in the tuple to have the referenced  column  automatically  set  to  the
       primary key of that table. A full example:

          db["authors"].insert_all([
              {"id": 1, "name": "Sally"},
              {"id": 2, "name": "Asheesh"}
          ], pk="id")
          db["books"].insert_all([
              {"title": "Hedgehogs of the world", "author_id": 1},
              {"title": "How to train your wolf", "author_id": 2},
          ], foreign_keys=[
              ("author_id", "authors")
          ])

   Table configuration options
       The  .insert(),  .upsert(),  .insert_all()  and  .upsert_all()  methods  each  take  a  number of keyword
       arguments, some of which influence what happens should they cause a table to be created and some of which
       affect the behavior of those methods.

       You can set default values for these methods by accessing the  table  through  the  db.table(...)  method
       (instead of using db["table_name"]), like so:

          table = db.table(
              "authors",
              pk="id",
              not_null={"name", "score"},
              column_order=("id", "name", "score", "url")
          )
          # Now you can call .insert() like so:
          table.insert({"id": 1, "name": "Tracy", "score": 5})

       The configuration options that can be specified in this way are pk, foreign_keys, column_order, not_null,
       defaults,  batch_size,  hash_id, hash_id_columns, alter, ignore, replace, extracts, conversions, columns.
       These are all documented below.

   Setting defaults and not null constraints
       Each of the methods that can cause a table to be  created  take  optional  arguments  not_null=set()  and
       defaults=dict(). The methods that take these optional arguments are:

       • db.create_table(...)

       • table.create(...)

       • table.insert(...)

       • table.insert_all(...)

       • table.upsert(...)

       • table.upsert_all(...)

       You  can  use  not_null= to pass a set of column names that should have a NOT NULL constraint set on them
       when they are created.

       You can use defaults= to pass a dictionary mapping columns to the default value that should be  specified
       in the CREATE TABLE statement.

       Here's an example that uses these features:

          db["authors"].insert_all(
              [{"id": 1, "name": "Sally", "score": 2}],
              pk="id",
              not_null={"name", "score"},
              defaults={"score": 1},
          )
          db["authors"].insert({"name": "Dharma"})

          list(db["authors"].rows)
          # Outputs:
          # [{'id': 1, 'name': 'Sally', 'score': 2},
           #  {'id': 2, 'name': 'Dharma', 'score': 1}]
          print(db["authors"].schema)
          # Outputs:
          # CREATE TABLE [authors] (
          #     [id] INTEGER PRIMARY KEY,
          #     [name] TEXT NOT NULL,
          #     [score] INTEGER NOT NULL DEFAULT 1
          # )

   Renaming a table
       The db.rename_table(old_name, new_name) method can be used to rename a table:

          db.rename_table("my_table", "new_name_for_my_table")

       This executes the following SQL:

          ALTER TABLE [my_table] RENAME TO [new_name_for_my_table]

   Duplicating tables
       The  table.duplicate()  method  creates a copy of the table, copying both the table schema and all of the
       rows in that table:

          db["authors"].duplicate("authors_copy")

       The new authors_copy table will now contain a duplicate copy of the data from authors.

       This method raises sqlite_utils.db.NoTable if the table does not exist.
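
       Here's a sketch (table names illustrative) that handles the missing-table case:

          from sqlite_utils.db import NoTable

          try:
              db["authors"].duplicate("authors_copy")
          except NoTable:
              print("authors table does not exist yet")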

   Bulk inserts
       If you have more than one record to insert, the insert_all() method is  a  much  more  efficient  way  of
       inserting  them.  Just like insert() it will automatically detect the columns that should be created, but
       it will inspect the first batch of 100 items to help decide what those column types should be.

       Use it like this:

          db["dogs"].insert_all([{
              "id": 1,
              "name": "Cleo",
              "twitter": "cleopaws",
              "age": 3,
              "is_good_dog": True,
          }, {
              "id": 2,
              "name": "Marnie",
              "twitter": "MarnieTheDog",
              "age": 16,
              "is_good_dog": True,
          }], pk="id", column_order=("id", "twitter", "name"))

       The column types used in the CREATE TABLE statement are automatically derived from the types of  data  in
       that   first   batch   of   rows.   Any   additional   columns   in   subsequent  batches  will  cause  a
       sqlite3.OperationalError exception to be raised unless the alter=True argument is supplied, in which case
       the new columns will be created.

       The function can accept an iterator or generator of rows and will commit  them  according  to  the  batch
       size. The default batch size is 100, but you can specify a different size using the batch_size parameter:

          db["big_table"].insert_all(({
              "id": 1,
              "name": "Name {}".format(i),
          } for i in range(10000)), batch_size=1000)

       You  can  skip  inserting any records that have a primary key that already exists using ignore=True. This
       works with both .insert({...}, ignore=True) and .insert_all([...], ignore=True).

       You can delete all the existing rows in the table before inserting the new records  using  truncate=True.
       This is useful if you want to replace the data in the table.

       Pass analyze=True to run ANALYZE against the table after inserting the new records.
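
       As an illustrative sketch combining these options:

          # new_records stands in for your own list of row dictionaries
          new_records = [{"id": 1, "name": "Cleo"}, {"id": 2, "name": "Marnie"}]
          # Replace all existing rows, then refresh the ANALYZE statistics
          db["dogs"].insert_all(new_records, pk="id", truncate=True, analyze=True)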

   Insert-replacing data
       If  you try to insert data using a primary key that already exists, the .insert() or .insert_all() method
       will raise a sqlite3.IntegrityError exception.

       This example catches that exception:

          from sqlite_utils.utils import sqlite3

          try:
              db["dogs"].insert({"id": 1, "name": "Cleo"}, pk="id")
          except sqlite3.IntegrityError:
              print("Record already exists with that primary key")

       Importing from sqlite_utils.utils.sqlite3 ensures your code continues to work even if you are  using  the
       pysqlite3 library instead of the Python standard library sqlite3 module.

       Use the ignore=True parameter to ignore this error:

          # This fails silently if a record with id=1 already exists
          db["dogs"].insert({"id": 1, "name": "Cleo"}, pk="id", ignore=True)

       To  replace  any  existing  records  that  have a matching primary key, use the replace=True parameter to
       .insert() or .insert_all():

          db["dogs"].insert_all([{
              "id": 1,
              "name": "Cleo",
              "twitter": "cleopaws",
              "age": 3,
              "is_good_dog": True,
          }, {
              "id": 2,
              "name": "Marnie",
              "twitter": "MarnieTheDog",
              "age": 16,
              "is_good_dog": True,
          }], pk="id", replace=True)

       NOTE:
          Prior  to  sqlite-utils  2.0  the  .upsert()  and  .upsert_all()  methods  worked  the  same  way   as
          .insert(replace=True) does today. See Upserting data for the new behaviour of those methods introduced
          in 2.0.

   Updating a specific record
       You can update a record by its primary key using table.update():

          >>> db = sqlite_utils.Database("dogs.db")
          >>> print(db["dogs"].get(1))
          {'id': 1, 'age': 4, 'name': 'Cleo'}
          >>> db["dogs"].update(1, {"age": 5})
          >>> print(db["dogs"].get(1))
          {'id': 1, 'age': 5, 'name': 'Cleo'}

       The  first  argument to update() is the primary key. This can be a single value, or a tuple if that table
       has a compound primary key:

          >>> db["compound_dogs"].update((5, 3), {"name": "Updated"})

       The second argument is a dictionary of columns that should be updated, along with their new values.

       You can cause any missing columns to be added automatically using alter=True:

          >>> db["dogs"].update(1, {"breed": "Mutt"}, alter=True)

   Deleting a specific record
       You can delete a record using table.delete():

          >>> db = sqlite_utils.Database("dogs.db")
          >>> db["dogs"].delete(1)

       The delete() method takes the primary key of the record. This can be a tuple of values if the row  has  a
       compound primary key:

          >>> db["compound_dogs"].delete((5, 3))

   Deleting multiple records
       You can delete all records in a table that match a specific WHERE statement using table.delete_where():

          >>> db = sqlite_utils.Database("dogs.db")
          >>> # Delete every dog with age less than 3
          >>> db["dogs"].delete_where("age < ?", [3])

       Calling table.delete_where() with no other arguments will delete every row in the table.

       Pass analyze=True to run ANALYZE against the table after deleting the rows.
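
       For example:

          # Delete every dog with age greater than 10, then run ANALYZE
          db["dogs"].delete_where("age > ?", [10], analyze=True)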

   Upserting data
       Upserting  allows  you  to insert records if they do not exist and update them if they DO exist, based on
       matching against their primary key.

       For example, given the dogs database you could upsert the record for Cleo like so:

          db["dogs"].upsert({
              "id": 1,
              "name": "Cleo",
              "twitter": "cleopaws",
              "age": 4,
              "is_good_dog": True,
          }, pk="id", column_order=("id", "twitter", "name"))

       If a record exists with id=1, it will be updated to match those fields. If it does not exist it  will  be
       created.

       Any  existing columns that are not referenced in the dictionary passed to .upsert() will be unchanged. If
       you want to replace a record entirely, use .insert(doc, replace=True) instead.

       Note that the pk and column_order parameters here are optional if you are  certain  that  the  table  has
       already  been  created.  You  should pass them if the table may not exist at the time the first upsert is
       performed.

       An upsert_all() method is also available, which behaves like insert_all() but performs upserts instead.
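
       For example, upserting two records at once:

          db["dogs"].upsert_all([
              {"id": 1, "age": 5},
              {"id": 2, "age": 17},
          ], pk="id")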

       NOTE:
          .upsert()  and  .upsert_all()  in  sqlite-utils  1.x  worked  like  .insert(...,   replace=True)   and
          .insert_all(..., replace=True) do in 2.x. See issue #66 for details of this change.

   Converting data in columns
       The  table.convert(...)  method  can  be  used  to apply a conversion function to the values in a column,
       either to update that column or to populate new columns. It is  the  Python  library  equivalent  of  the
       sqlite-utils convert command.

       This  feature  works  by  registering a custom SQLite function that applies a Python transformation, then
       running a SQL query equivalent to UPDATE table SET column = convert_value(column);

       To transform a specific column to uppercase, you would use the following:

          db["dogs"].convert("name", lambda value: value.upper())

       You can pass a list of columns, in which case the transformation will be applied to each one:

          db["dogs"].convert(["name", "twitter"], lambda value: value.upper())

       To save the output of the transformation to a different column, use the output= parameter:

          db["dogs"].convert("name", lambda value: value.upper(), output="name_upper")

       This will add the new column, if it does not already exist. You can pass output_type=int  or  some  other
       type to control the type of the new column - otherwise it will default to text.

       If  you  want  to  drop  the  original  column after saving the results in a separate output column, pass
       drop=True.

       By default any rows with a falsey value for the column - such as 0  or  None  -  will  be  skipped.  Pass
       skip_false=False to disable this behaviour.
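
       Here's a sketch of that: because the function will now receive falsey values too, it guards against None
       before calling .upper():

          db["dogs"].convert(
              "name",
              lambda value: (value or "").upper(),
              skip_false=False,
          )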

       You  can  create  multiple  new columns from a single input column by passing multi=True and a conversion
       function that returns a Python dictionary. This example creates new upper  and  lower  columns  populated
       from the single title column:

          table.convert(
              "title", lambda v: {"upper": v.upper(), "lower": v.lower()}, multi=True
          )

       The  .convert()  method accepts optional where= and where_args= parameters which can be used to apply the
       conversion to a subset of rows specified by a where clause. Here's how to apply the  conversion  only  to
       rows with an id that is higher than 20:

          table.convert("title", lambda v: v.upper(), where="id > :id", where_args={"id": 20})

       These  behave  the  same  as  the  corresponding parameters to the .rows_where() method, so you can use ?
       placeholders and a list of values instead of :named placeholders with a dictionary.
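
       The same conversion expressed with ? placeholders:

          table.convert("title", lambda v: v.upper(), where="id > ?", where_args=[20])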

   Working with lookup tables
       A useful pattern when populating large tables is to break common values out into lookup tables. Consider
       a table of Trees, where each tree has a species. Ideally these species would be split out into a separate
       Species  table, with each one assigned an integer primary key that can be referenced from the Trees table
       species_id column.

   Creating lookup tables explicitly
       Calling db["Species"].lookup({"name": "Palm"}) creates a table called Species (if one  does  not  already
       exist)  with  two columns: id and name. It sets up a unique constraint on the name column to guarantee it
       will not contain duplicate rows. It then inserts a new row with the name set to Palm and returns the  new
       integer primary key value.

       If the Species table already exists, it will insert the new row and return the primary key. If a row with
       that name already exists, it will return the corresponding primary key value directly.
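
       A minimal sketch of that behaviour:

          palm_id = db["Species"].lookup({"name": "Palm"})
          # A second call with the same name returns the same primary key
          assert palm_id == db["Species"].lookup({"name": "Palm"})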

       If  you call .lookup() against an existing table without the unique constraint it will attempt to add the
       constraint, raising an IntegrityError if the constraint cannot be created.

       If you pass in a dictionary with multiple values, both values will be used  to  insert  or  retrieve  the
       corresponding ID and any unique constraint that is created will cover all of those columns, for example:

          db["Trees"].insert({
              "latitude": 49.1265976,
              "longitude": 2.5496218,
              "species": db["Species"].lookup({
                  "common_name": "Common Juniper",
                  "latin_name": "Juniperus communis"
              })
          })

       The  .lookup()  method has an optional second argument which can be used to populate other columns in the
       table but only if the row does not exist yet. These columns will not be included in the unique index.

       To create a species record with a note on when it was first seen, you can use this:

          db["Species"].lookup({"name": "Palm"}, {"first_seen": "2021-03-04"})

       The first time this is called the record will be created for name="Palm". Any subsequent calls with  that
       name will ignore the second argument, even if it includes different values.

       .lookup() also accepts keyword arguments, which are passed through to the insert() method and can be used
       to influence the shape of the created table. Supported parameters are:

       • pk - which defaults to id

       • foreign_keys

       • column_order

       • not_null

       • defaults

       • extracts

       • conversions

       • columns

   Populating lookup tables automatically during insert/upsert
       A more efficient way to work with lookup tables is to define them using the extracts= parameter, which is
       accepted by .insert(), .upsert(), .insert_all(), .upsert_all() and by the .table(...) factory function.

       extracts=  specifies columns which should be "extracted" out into a separate lookup table during the data
       insertion.

       It can be either a list of column names, in which case the extracted table names will  match  the  column
       names exactly, or it can be a dictionary mapping column names to the desired name of the extracted table.

       To extract the species column out to a separate Species table, you can do this:

          # Using the table factory
          trees = db.table("Trees", extracts={"species": "Species"})
          trees.insert({
              "latitude": 49.1265976,
              "longitude": 2.5496218,
              "species": "Common Juniper"
          })

          # If you want the table to be called 'species', you can do this:
          trees = db.table("Trees", extracts=["species"])

          # Using .insert() directly
          db["Trees"].insert({
              "latitude": 49.1265976,
              "longitude": 2.5496218,
              "species": "Common Juniper"
          }, extracts={"species": "Species"})
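
       After any of the above, the species column on Trees holds an integer key into the lookup table rather
       than the original string - a quick sketch of reading that back:

          row = next(db["Trees"].rows)
          # row["species"] is now an integer primary key from the Species table
          print(row["species"])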

   Working with many-to-many relationships
       sqlite-utils includes a shortcut for creating records using many-to-many relationships in the form of the
       table.m2m(...) method.

       Here's how to create two new records and connect them via a many-to-many table in a single line of code:

          db["dogs"].insert({"id": 1, "name": "Cleo"}, pk="id").m2m(
              "humans", {"id": 1, "name": "Natalie"}, pk="id"
          )

       Running this example actually creates three tables: dogs, humans and a many-to-many dogs_humans table. It
       will insert a record into each of those tables.

       The  .m2m()  method  executes  against  the last record that was affected by .insert() or .update() - the
       record identified by the table.last_pk property. To execute .m2m() against  a  specific  record  you  can
       first select it by passing its primary key to .update():

          db["dogs"].update(1).m2m(
              "humans", {"id": 2, "name": "Simon"}, pk="id"
          )

       The  first argument to .m2m() can be either the name of a table as a string or it can be the table object
       itself.

       The second argument can be a single dictionary record or a list of dictionaries. These dictionaries  will
       be passed to .upsert() against the specified table.

       Here's alternative code that creates the dog record and adds two people to it:

          db = Database(memory=True)
          dogs = db.table("dogs", pk="id")
          humans = db.table("humans", pk="id")
          dogs.insert({"id": 1, "name": "Cleo"}).m2m(
              humans, [
                  {"id": 1, "name": "Natalie"},
                  {"id": 2, "name": "Simon"}
              ]
          )

       The  method  will  attempt to find an existing many-to-many table by looking for a table that has foreign
       key relationships against both of the tables in the relationship.

       If it cannot find such a table, it will create a new one using the names of the two tables -  dogs_humans
       in this example. You can customize the name of this table using the m2m_table= argument to .m2m().

       If it finds multiple candidate tables with foreign keys to both of the specified tables it will raise a
       sqlite_utils.db.NoObviousTable exception. You can avoid this error by specifying the correct table  using
       m2m_table=.

       The  .m2m()  method also takes an optional pk= argument to specify the primary key that should be used if
       the table is created, and an optional alter=True argument to specify  that  any  missing  columns  of  an
       existing table should be added if they are needed.

   Using m2m and lookup tables together
       You can work with (or create) lookup tables as part of a call to .m2m() using the lookup= parameter. This
       accepts  the  same argument as table.lookup() does - a dictionary of values that should be used to lookup
       or create a row in the lookup table.

       This example creates a dogs table, populates it, creates a characteristics table, populates that and sets
       up a many-to-many relationship between  the  two.  It  chains  .m2m()  twice  to  create  two  associated
       characteristics:

          db = Database(memory=True)
          dogs = db.table("dogs", pk="id")
          dogs.insert({"id": 1, "name": "Cleo"}).m2m(
              "characteristics", lookup={
                  "name": "Playful"
              }
          ).m2m(
              "characteristics", lookup={
                  "name": "Opinionated"
              }
          )

       You can inspect the database to see the results like this:

          >>> db.table_names()
          ['dogs', 'characteristics', 'characteristics_dogs']
          >>> list(db["dogs"].rows)
          [{'id': 1, 'name': 'Cleo'}]
          >>> list(db["characteristics"].rows)
          [{'id': 1, 'name': 'Playful'}, {'id': 2, 'name': 'Opinionated'}]
          >>> list(db["characteristics_dogs"].rows)
          [{'characteristics_id': 1, 'dogs_id': 1}, {'characteristics_id': 2, 'dogs_id': 1}]
          >>> print(db["characteristics_dogs"].schema)
          CREATE TABLE [characteristics_dogs] (
              [characteristics_id] INTEGER REFERENCES [characteristics]([id]),
              [dogs_id] INTEGER REFERENCES [dogs]([id]),
              PRIMARY KEY ([characteristics_id], [dogs_id])
          )

   Analyzing a column
       The table.analyze_column(column) method is used by the analyze-tables CLI command.

       It takes the following arguments and options:

       column - required
              The name of the column to analyze

       common_limit
              The number of most common values to return. Defaults to 10.

       value_truncate
              If set to an integer, values longer than this will be truncated to this length. Defaults to None.

       most_common
              If set to False, the most_common field of the returned ColumnDetails will be set to None. Defaults
              to True.

       least_common
              If  set  to  False,  the  least_common  field  of  the returned ColumnDetails will be set to None.
              Defaults to True.

       And returns a ColumnDetails named tuple with the following fields:

       table  The name of the table

       column The name of the column

       total_rows
              The total number of rows in the table

       num_null
              The number of rows for which this column is null

       num_blank
              The number of rows for which this column is blank (the empty string)

       num_distinct
              The number of distinct values in this column

       most_common
               The N most common values as a list of (value, count) tuples, or None if the table consists
               entirely of distinct values

       least_common
               The N least common values as a list of (value, count) tuples, or None if the table is entirely
               distinct or if the number of distinct values is less than N (since they will already have been
               returned in most_common)
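
       For example:

          details = db["dogs"].analyze_column("name", common_limit=5)
          print(details.num_distinct)
          print(details.most_common)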

   Adding columns
       You can add a new column to a table using the .add_column(col_name, col_type) method:

          db["dogs"].add_column("instagram", str)
          db["dogs"].add_column("weight", float)
          db["dogs"].add_column("dob", datetime.date)
          db["dogs"].add_column("image", "BLOB")
          db["dogs"].add_column("website") # str by default

       You  can  specify  the col_type argument either using a SQLite type as a string, or by directly passing a
       Python type e.g. str or float.

       The col_type is optional - if you omit it the type of TEXT will be used.

       SQLite types you can specify are "TEXT", "INTEGER", "FLOAT" or "BLOB".

       If you pass a Python type, it will be mapped to SQLite types as shown here:

          float: "FLOAT"
          int: "INTEGER"
          bool: "INTEGER"
          str: "TEXT"
          bytes: "BLOB"
          datetime.datetime: "TEXT"
          datetime.date: "TEXT"
          datetime.time: "TEXT"
          datetime.timedelta: "TEXT"

          # If numpy is installed
          np.int8: "INTEGER"
          np.int16: "INTEGER"
          np.int32: "INTEGER"
          np.int64: "INTEGER"
          np.uint8: "INTEGER"
          np.uint16: "INTEGER"
          np.uint32: "INTEGER"
          np.uint64: "INTEGER"
          np.float16: "FLOAT"
          np.float32: "FLOAT"
          np.float64: "FLOAT"

       You can also add a column that is a foreign key reference to another table using the fk parameter:

          db["dogs"].add_column("species_id", fk="species")

       This will automatically detect the name of the primary key on the species table and  use  that  (and  its
       type) for the new column.

       You can explicitly specify the column you wish to reference using fk_col:

          db["dogs"].add_column("species_id", fk="species", fk_col="ref")

       You can set a NOT NULL DEFAULT 'x' constraint on the new column using not_null_default:

          db["dogs"].add_column("friends_count", int, not_null_default=0)

   Adding columns automatically on insert/update
       You  can  insert or update data that includes new columns and have the table automatically altered to fit
       the new schema using the alter=True argument. This can be passed to all  four  of  .insert(),  .upsert(),
       .insert_all()  and .upsert_all(), or it can be passed to db.table(table_name, alter=True) to enable it by
       default for all method calls against that table instance.

          db["new_table"].insert({"name": "Gareth"})
          # This will throw an exception:
          db["new_table"].insert({"name": "Gareth", "age": 32})
          # This will succeed and add a new "age" integer column:
          db["new_table"].insert({"name": "Gareth", "age": 32}, alter=True)
          # You can confirm the new column like so:
          print(db["new_table"].columns_dict)
          # Outputs this:
          # {'name': <class 'str'>, 'age': <class 'int'>}

          # This works too:
          new_table = db.table("new_table", alter=True)
          new_table.insert({"name": "Gareth", "age": 32, "shoe_size": 11})

   Adding foreign key constraints
       The SQLite ALTER TABLE statement doesn't have the ability to add foreign key references  to  an  existing
       column.

       The add_foreign_key() method here is a convenient wrapper around table.transform().

       It's   also   possible   to   add  foreign  keys  by  directly  updating  the  sqlite_master  table.  The
       sqlite-utils-fast-fks plugin implements this pattern, using code  that  was  included  with  sqlite-utils
       prior to version 3.35.

       Here's an example of this mechanism in action:

          db["authors"].insert_all([
              {"id": 1, "name": "Sally"},
              {"id": 2, "name": "Asheesh"}
          ], pk="id")
          db["books"].insert_all([
              {"title": "Hedgehogs of the world", "author_id": 1},
              {"title": "How to train your wolf", "author_id": 2},
          ])
          db["books"].add_foreign_key("author_id", "authors", "id")

       The  table.add_foreign_key(column,  other_table,  other_column)  method takes the name of the column, the
       table that is being referenced and the key column within that other table. If you omit  the  other_column
       argument the primary key from that table will be used automatically. If you omit the other_table argument
       the table will be guessed based on some simple rules:

       • If the column is of format author_id, look for tables called author or authors

       • If  the  column  does not end in _id, try looking for a table with the exact name of the column or that
         name with an added s

       This method first checks that the specified foreign key references tables and columns that exist and does
       not clash with an existing foreign key. It will raise a  sqlite_utils.db.AlterError  exception  if  these
       checks fail.

       To ignore the case where the key already exists, use ignore=True:

          db["books"].add_foreign_key("author_id", "authors", "id", ignore=True)

   Adding multiple foreign key constraints at once
       You  can use db.add_foreign_keys(...) to add multiple foreign keys in one go. This method takes a list of
       four-tuples, each one specifying a table, column, other_table and other_column.

       Here's an example adding two foreign keys at once:

          db.add_foreign_keys([
              ("dogs", "breed_id", "breeds", "id"),
              ("dogs", "home_town_id", "towns", "id")
          ])

       This method runs the same checks as .add_foreign_key() and will raise sqlite_utils.db.AlterError if
       those checks fail.

   Adding indexes for all foreign keys
       If  you  want to ensure that every foreign key column in your database has a corresponding index, you can
       do so like this:

          db.index_foreign_keys()

   Dropping a table or view
       You can drop a table or view using the .drop() method:

          db["my_table"].drop()

       Pass ignore=True if you want to ignore the error caused by the table or view not existing.

          db["my_table"].drop(ignore=True)

   Transforming a table
       The SQLite ALTER TABLE statement is limited. It can add and drop columns and rename tables, but it cannot
       change column types, change NOT NULL status or change the primary key for a table.

       The table.transform() method can do all of these things, by implementing a multi-step  pattern  described
       in the SQLite documentation:

       1. Start a transaction

       2. CREATE TABLE tablename_new_x123 with the required changes

       3. Copy the old data into the new table using INSERT INTO tablename_new_x123 SELECT * FROM tablename;

       4. DROP TABLE tablename;

       5. ALTER TABLE tablename_new_x123 RENAME TO tablename;

       6. Commit the transaction

       The .transform() method takes a number of parameters, all of which are optional.

       As  a bonus, calling .transform() will reformat the schema for the table that is stored in SQLite to make
       it more readable. This works even if you call it without any arguments.

       To keep the original table around instead of dropping it, pass the keep_table=  option  and  specify  the
       name of the table you would like it to be renamed to:

          table.transform(types={"age": int}, keep_table="original_table")

   Altering column types
       To alter the type of a column, use the types= argument:

          # Convert the 'age' column to an integer, and 'weight' to a float
          table.transform(types={"age": int, "weight": float})

       See Adding columns for a list of available types.

   Renaming columns
       The rename= parameter can rename columns:

          # Rename 'age' to 'initial_age':
          table.transform(rename={"age": "initial_age"})

   Dropping columns
       To drop columns, pass them in the drop= set:

          # Drop the 'age' column:
          table.transform(drop={"age"})

   Changing primary keys
       To  change the primary key for a table, use pk=. This can be passed a single column for a regular primary
       key, or a tuple of columns to create a compound primary key. Passing pk=None will remove the primary  key
       and convert the table into a rowid table.

          # Make `user_id` the new primary key
          table.transform(pk="user_id")
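
          # Remove the primary key, converting the table to a rowid table
          table.transform(pk=None)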

   Changing not null status
       You  can  change the NOT NULL status of columns by using not_null=. You can pass this a set of columns to
       make those columns NOT NULL:

          # Make the 'age' and 'weight' columns NOT NULL
          table.transform(not_null={"age", "weight"})

       If you want to take existing NOT NULL columns and change them to allow null values,  you  can  do  so  by
       passing a dictionary of true/false values instead:

          # 'age' is NOT NULL but we want to allow NULL:
          table.transform(not_null={"age": False})

          # Make age allow NULL and switch weight to being NOT NULL:
          table.transform(not_null={"age": False, "weight": True})

   Altering column defaults
       The defaults= parameter can be used to set or change the defaults for different columns:

          # Set default age to 1:
          table.transform(defaults={"age": 1})

          # Now remove the default from that column:
          table.transform(defaults={"age": None})

   Changing column order
       The  column_order=  parameter  can be used to change the order of the columns. If you pass the names of a
       subset of the columns those will go first and columns you omitted will appear  in  their  existing  order
       after them.

          # Change column order
          table.transform(column_order=("name", "age", "id"))

   Adding foreign key constraints
       You can add one or more foreign key constraints to a table using the add_foreign_keys= parameter:

          db["places"].transform(
              add_foreign_keys=(
                  ("country", "country", "id"),
                  ("continent", "continent", "id")
              )
          )

       This accepts the same arguments described in Specifying foreign keys - so you can specify them as a full
       tuple of (column, other_table, other_column), or you can take a shortcut and pass just the name of the
       column, provided the table can be automatically derived from the column name:

          db["places"].transform(
              add_foreign_keys=("country", "continent")
          )

   Replacing foreign key constraints
       The foreign_keys= parameter is similar to add_foreign_keys= but can be used to replace all foreign key
       constraints on a table, dropping any that are not explicitly mentioned:

          db["places"].transform(
              foreign_keys=(
                  ("continent", "continent", "id"),
              )
          )

   Dropping foreign key constraints
       You can use .transform() to remove foreign key constraints from a table.

       This  example  drops  two  foreign  keys  -  the  one  from places.country to country.id and the one from
       places.continent to continent.id:

          db["places"].transform(
              drop_foreign_keys=("country", "continent")
          )

   Custom transformations with .transform_sql()
       The .transform() method can handle most cases, but it does not automatically upgrade  indexes,  views  or
       triggers associated with the table that is being transformed.

       If you want to do something more advanced, you can call the table.transform_sql(...) method with the same
       arguments that you would have passed to table.transform(...).

       This method will return a list of SQL statements that should be executed to implement the change. You can
       then make modifications to that SQL - or add additional SQL statements - before executing it yourself.
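
       A sketch of that workflow:

          statements = table.transform_sql(types={"age": int})
          for sql in statements:
              print(sql)
          # After reviewing or editing the statements, execute them yourself:
          for sql in statements:
              db.execute(sql)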

   Extracting columns into a separate table
       The table.extract() method can be used to extract specified columns into a separate table.

       Imagine a Trees table that looks like this:
                                           ┌────┬──────────────┬─────────┐
                                           │ id │ TreeAddress  │ Species │
                                           ├────┼──────────────┼─────────┤
                                           │ 1  │ 52 Vine St   │ Palm    │
                                           ├────┼──────────────┼─────────┤
                                           │ 2  │ 12 Draft St  │ Oak     │
                                           ├────┼──────────────┼─────────┤
                                           │ 3  │ 51 Dark Ave  │ Palm    │
                                           ├────┼──────────────┼─────────┤
                                           │ 4  │ 1252 Left St │ Palm    │
                                           └────┴──────────────┴─────────┘

       The  Species  column contains duplicate values. This database could be improved by extracting that column
       out into a separate Species table and pointing to it using a foreign key column.

       The schema of the above table is:

          CREATE TABLE [Trees] (
              [id] INTEGER PRIMARY KEY,
              [TreeAddress] TEXT,
              [Species] TEXT
          )

       Here's how to extract the Species column using .extract():

          db["Trees"].extract("Species")

       After running this code the table schema now looks like this:

          CREATE TABLE "Trees" (
              [id] INTEGER PRIMARY KEY,
              [TreeAddress] TEXT,
              [Species_id] INTEGER,
              FOREIGN KEY(Species_id) REFERENCES Species(id)
          )

       A new Species table will have been created with the following schema:

          CREATE TABLE [Species] (
              [id] INTEGER PRIMARY KEY,
              [Species] TEXT
          )

       The .extract() method defaults to creating a table with the same name as the column that  was  extracted,
       and adding a foreign key column called tablename_id.

       You  can  specify  a custom table name using table=, and a custom foreign key name using fk_column=. This
       example creates a table called tree_species and a foreign key column called tree_species_id:

          db["Trees"].extract("Species", table="tree_species", fk_column="tree_species_id")

       The resulting schema looks like this:

          CREATE TABLE "Trees" (
              [id] INTEGER PRIMARY KEY,
              [TreeAddress] TEXT,
              [tree_species_id] INTEGER,
              FOREIGN KEY(tree_species_id) REFERENCES tree_species(id)
          )

          CREATE TABLE [tree_species] (
              [id] INTEGER PRIMARY KEY,
              [Species] TEXT
          )

       You can also extract multiple columns into the same external table. Say for example you have a table like
       this:
                                     ┌────┬──────────────┬────────────┬───────────┐
                                     │ id │ TreeAddress  │ CommonName │ LatinName │
                                     ├────┼──────────────┼────────────┼───────────┤
                                     │ 1  │ 52 Vine St   │ Palm       │ Arecaceae │
                                     ├────┼──────────────┼────────────┼───────────┤
                                     │ 2  │ 12 Draft St  │ Oak        │ Quercus   │
                                     ├────┼──────────────┼────────────┼───────────┤
                                     │ 3  │ 51 Dark Ave  │ Palm       │ Arecaceae │
                                     ├────┼──────────────┼────────────┼───────────┤
                                     │ 4  │ 1252 Left St │ Palm       │ Arecaceae │
                                     └────┴──────────────┴────────────┴───────────┘
--

AUTHOR

       Simon Willison

COPYRIGHT

       2018-2023, Simon Willison

                                                  Nov 08, 2023                                   SQLITE-UTILS(1)