libpqxx  7.3.1
pqxx::stream_to Class Reference

Efficiently write data directly to a database table. More...

#include <stream_to.hxx>

Public Member Functions

 stream_to (transaction_base &, std::string_view table_name)
 Create a stream, without specifying columns. More...
 
template<typename Columns >
 stream_to (transaction_base &, std::string_view table_name, Columns const &columns)
 Create a stream, specifying column names as a container of strings. More...
 
template<typename Iter >
 stream_to (transaction_base &, std::string_view table_name, Iter columns_begin, Iter columns_end)
 Create a stream, specifying column names as a sequence of strings. More...
 
 ~stream_to () noexcept
 
 operator bool () const noexcept
 Does this stream still need to complete()? More...
 
bool operator! () const noexcept
 Has this stream been through its concluding complete()? More...
 
void complete ()
 Complete the operation, and check for errors. More...
 
template<typename Row >
stream_to & operator<< (Row const &row)
 Insert a row of data. More...
 
stream_to & operator<< (stream_from &)
 Stream a stream_from straight into a stream_to. More...
 
template<typename Row >
void write_row (Row const &row)
 Insert a row of data, given in the form of a std::tuple or container. More...
 
template<typename... Ts>
void write_values (const Ts &...fields)
 Insert values as a row. More...
 

Detailed Description

Efficiently write data directly to a database table.

If you wish to insert rows of data into a table, you can compose INSERT statements and execute them. But it's slow and tedious, and you need to worry about quoting and escaping the data.

If you're just inserting a single row, it probably won't matter much. You can use prepared or parameterised statements to take care of the escaping for you. But if you're inserting large numbers of rows you will want something better.

Inserting rows one by one using INSERT statements involves a lot of pointless overhead, especially when you are working with a remote database server over the network. You may end up sending each row over the network as a separate query, and waiting for a reply. Do it "in bulk" using stream_to, and you may find that it goes many times faster. Sometimes you gain orders of magnitude in speed.

Here's how it works: you create a stream_to stream to start writing to your table. You will probably want to specify the columns. Then, you feed your data into the stream one row at a time. And finally, you call the stream's complete() to tell it to finalise the operation, wait for completion, and check for errors.
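That flow can be sketched as follows. The table name inventory, its columns id and name, and the connection configuration (taken from the usual PG* environment variables) are all assumptions for illustration:

```cpp
#include <string>
#include <vector>

#include <pqxx/pqxx>

int main()
{
  pqxx::connection conn;  // Connection parameters come from the environment.
  pqxx::work tx{conn};

  // Open the stream, naming the columns we will be writing.
  // "inventory" (id INTEGER, name VARCHAR) is a hypothetical table.
  pqxx::stream_to stream{
    tx, "inventory", std::vector<std::string>{"id", "name"}};

  // Feed data into the stream one row at a time.
  stream.write_values(1, "wrench");
  stream.write_values(2, "hammer");

  // Finalise the operation, wait for completion, and check for errors.
  stream.complete();
  tx.commit();
}
```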

So how do you feed a row of data into the stream? There are several ways, but the preferred one is to call its write_values. Pass the field values as arguments. It doesn't matter what type they are, as long as libpqxx knows how to convert them to PostgreSQL's text format: int, std::string or std::string_view, float and double, bool... lots of basic types are supported. If some of the values are null, feel free to use std::optional, std::shared_ptr, or std::unique_ptr.

The arguments' types don't even have to match the fields' SQL types. If you want to insert an int into a DECIMAL column, that's your choice – it will produce a DECIMAL value which happens to be integral. Insert a float into a VARCHAR column? That's fine, you'll get a string whose contents happen to read like a number. And so on. You can even insert different types of value in the same column on different rows. If you have a code path where a particular field is always null, just insert nullptr.
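A short sketch of that flexibility, assuming a hypothetical table measurements(id INTEGER, reading DECIMAL) and environment-based connection settings:

```cpp
#include <optional>
#include <string>
#include <vector>

#include <pqxx/pqxx>

int main()
{
  pqxx::connection conn;
  pqxx::work tx{conn};
  // "measurements" is a hypothetical table for this example.
  pqxx::stream_to stream{
    tx, "measurements", std::vector<std::string>{"id", "reading"}};

  // An int goes happily into the DECIMAL column...
  stream.write_values(1, 42);
  // ...and so does a double, on another row of the same column.
  stream.write_values(2, 3.14);

  // An empty std::optional becomes SQL NULL.
  stream.write_values(3, std::optional<double>{});

  // A field that is always null can simply be passed as nullptr.
  stream.write_values(4, nullptr);

  stream.complete();
  tx.commit();
}
```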

There is another way to insert rows: the << ("shift-left") operator. It's not as fast and it doesn't support variable arguments: each row must be either a std::tuple or something iterable, such as a std::vector, or anything else with a begin and end.
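The shift-left style looks like this; the table and columns are the same hypothetical inventory table as elsewhere on this page:

```cpp
#include <string>
#include <tuple>
#include <vector>

#include <pqxx/pqxx>

int main()
{
  pqxx::connection conn;
  pqxx::work tx{conn};
  // "inventory" (id INTEGER, name VARCHAR) is a hypothetical table.
  pqxx::stream_to stream{
    tx, "inventory", std::vector<std::string>{"id", "name"}};

  // A std::tuple works, and operator<< returns the stream so calls chain.
  stream << std::tuple{5, "screwdriver"} << std::tuple{6, "chisel"};

  // So does anything iterable; here, fields as pre-rendered strings.
  std::vector<std::string> row{"7", "awl"};
  stream << row;

  stream.complete();
  tx.commit();
}
```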

Warning
While a stream is active, you cannot execute queries, open a pipeline, etc. on the same transaction. A transaction can have at most one object of a type derived from pqxx::internal::transactionfocus active on it at a time.

Constructor & Destructor Documentation

◆ stream_to() [1/3]

pqxx::stream_to::stream_to ( transaction_base &  tb,
std::string_view  table_name 
)

Create a stream, without specifying columns.

Fields will be inserted in whatever order the columns have in the database.

You'll probably want to specify the columns, so that the mapping between your data fields and the table is explicit in your code, and not hidden in an "implicit contract" between your code and your schema.
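A sketch of the difference, using the same hypothetical inventory table as the rest of this page:

```cpp
#include <string>
#include <vector>

#include <pqxx/pqxx>

int main()
{
  pqxx::connection conn;
  pqxx::work tx{conn};

  // Implicit: fields must match the table's column order exactly.
  // If the schema ever changes, this code can silently break.
  pqxx::stream_to implicit_cols{tx, "inventory"};
  implicit_cols.write_values(8, "mallet");
  implicit_cols.complete();

  // Explicit: the mapping between fields and columns is visible in the code.
  pqxx::stream_to explicit_cols{
    tx, "inventory", std::vector<std::string>{"id", "name"}};
  explicit_cols.write_values(9, "rasp");
  explicit_cols.complete();

  tx.commit();
}
```

Note that the second stream opens only after the first has completed: a transaction can have only one active stream at a time.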

◆ stream_to() [2/3]

template<typename Columns >
pqxx::stream_to::stream_to ( transaction_base &  tb,
std::string_view  table_name,
Columns const &  columns 
)

Create a stream, specifying column names as a container of strings.

◆ stream_to() [3/3]

template<typename Iter >
pqxx::stream_to::stream_to ( transaction_base &  tb,
std::string_view  table_name,
Iter  columns_begin,
Iter  columns_end 
)

Create a stream, specifying column names as a sequence of strings.

References pqxx::separated_list().

◆ ~stream_to()

pqxx::stream_to::~stream_to ( )
noexcept

References complete().

Member Function Documentation

◆ complete()

void pqxx::stream_to::complete ( )

Complete the operation, and check for errors.

Always call this to close the stream in an orderly fashion, even after an error. (In the case of an error, abort the transaction afterwards.)

The only circumstance where it's safe to skip this is after an error, if you're discarding the entire connection.
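One way to sketch that pattern, using operator bool to check whether the stream still needs its concluding complete(); the table is hypothetical and the error handling is deliberately simplified:

```cpp
#include <exception>
#include <string>
#include <vector>

#include <pqxx/pqxx>

int main()
{
  pqxx::connection conn;
  pqxx::work tx{conn};
  // "inventory" is a hypothetical table.
  pqxx::stream_to stream{
    tx, "inventory", std::vector<std::string>{"id", "name"}};

  try
  {
    stream.write_values(10, "plane");
    stream.complete();
    tx.commit();
  }
  catch (std::exception const &)
  {
    // Close the stream in an orderly fashion if it hasn't been closed yet...
    if (stream)
      stream.complete();
    // ...then abort the transaction.
    tx.abort();
    return 1;
  }
}
```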

Referenced by ~stream_to().

◆ operator bool()

pqxx::stream_to::operator bool ( ) const
noexcept

Does this stream still need to complete()?

◆ operator!()

bool pqxx::stream_to::operator! ( ) const
noexcept

Has this stream been through its concluding complete()?

◆ operator<<() [1/2]

template<typename Row >
stream_to& pqxx::stream_to::operator<< ( Row const &  row)

Insert a row of data.

Returns a reference to the stream, so you can chain the calls.

The row can be a tuple, or any type that can be iterated. Each item becomes a field in the row, in the same order as the columns you specified when creating the stream.

If you don't already happen to have your fields in the form of a tuple or container, prefer write_values. It's faster and more convenient.

References pqxx::operator<<().

◆ operator<<() [2/2]

pqxx::stream_to & pqxx::stream_to::operator<< ( stream_from &  tr)

Stream a stream_from straight into a stream_to.

This can be useful when copying between different databases. If the source and the destination are on the same database, you'll get better performance doing it all in a regular query.
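A sketch of such a cross-database copy. The connection strings, the items table, and the construction of stream_from directly from a table name are all assumptions for illustration:

```cpp
#include <pqxx/pqxx>

int main()
{
  // Two separate databases; these connection strings are hypothetical.
  pqxx::connection src_conn{"dbname=source_db"};
  pqxx::connection dst_conn{"dbname=dest_db"};

  pqxx::work src_tx{src_conn};
  pqxx::work dst_tx{dst_conn};

  // Read every row of "items" on one database...
  pqxx::stream_from from{src_tx, "items"};
  // ...and write them straight into "items" on the other.
  pqxx::stream_to to{dst_tx, "items"};

  to << from;  // Exhausts the source stream, row by row.

  to.complete();
  dst_tx.commit();
  src_tx.commit();
}
```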

References pqxx::stream_from::get_raw_line().

◆ write_row()

template<typename Row >
void pqxx::stream_to::write_row ( Row const &  row)

Insert a row of data, given in the form of a std::tuple or container.

The row can be a tuple, or any type that can be iterated. Each item becomes a field in the row, in the same order as the columns you specified when creating the stream.

The preferred way to insert a row is write_values.
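For completeness, a sketch of write_row with both row shapes, against the same hypothetical inventory table used elsewhere on this page:

```cpp
#include <string>
#include <tuple>
#include <vector>

#include <pqxx/pqxx>

int main()
{
  pqxx::connection conn;
  pqxx::work tx{conn};
  // "inventory" (id INTEGER, name VARCHAR) is a hypothetical table.
  pqxx::stream_to stream{
    tx, "inventory", std::vector<std::string>{"id", "name"}};

  // A tuple: the fields may have different types.
  stream.write_row(std::tuple{11, std::string{"file"}});

  // A container: every field has the same type.
  std::vector<std::string> row{"12", "clamp"};
  stream.write_row(row);

  stream.complete();
  tx.commit();
}
```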

◆ write_values()

template<typename... Ts>
void pqxx::stream_to::write_values ( const Ts &...  fields)

Insert values as a row.

This is the recommended way of inserting data. Pass your field values, of any convertible type.

References pqxx::is_null(), pqxx::size_buffer(), and pqxx::to_buf().

