How is it intended to do multi-threading with this library? #7284
Comments
As arrow-rs is designed to be embedded in various different environments, it makes no assumptions about the runtime environment, instead providing the raw primitives for people to use as appropriate.

For reading parquet, row groups can be decoded in parallel. See https://docs.rs/parquet/latest/parquet/arrow/arrow_reader/struct.ArrowReaderBuilder.html#method.new_with_metadata

For writing parquet, see https://docs.rs/parquet/latest/parquet/arrow/arrow_writer/struct.ArrowColumnWriter.html

These can be used with a threadpool such as rayon or similar. If you'd prefer a more batteries-included experience, I would recommend looking at a fully fledged query engine, such as DataFusion, that wires these primitives up for you.
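As a structural sketch (not from the thread), the row-group-parallel read pattern could look like the following. `decode_row_group` is a placeholder so the example runs standalone; in real code it would build a reader via `ParquetRecordBatchReaderBuilder::new_with_metadata` with a row-group selection (see the linked docs) and collect the resulting `RecordBatch`es:

```rust
use std::thread;

// Placeholder for per-row-group decoding. In real code this would load the
// file metadata once with ArrowReaderMetadata::load, then on each worker
// build a reader restricted to one row group, roughly:
//   ParquetRecordBatchReaderBuilder::new_with_metadata(file, metadata.clone())
//       .with_row_groups(vec![rg])
//       .build()
fn decode_row_group(rg: usize) -> Vec<usize> {
    vec![rg * 10, rg * 10 + 1] // stand-in for the decoded rows
}

fn read_parallel(num_row_groups: usize) -> Vec<usize> {
    // One thread per row group for brevity; a real application would use a
    // bounded pool such as rayon rather than spawning unboundedly.
    let handles: Vec<_> = (0..num_row_groups)
        .map(|rg| thread::spawn(move || decode_row_group(rg)))
        .collect();
    // Join in spawn order so the output preserves row-group order.
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    let rows = read_parallel(3);
    println!("{:?}", rows);
}
```

The key point is that row groups are independent units in a parquet file, so each worker can decode one without coordinating with the others.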
This is relatively non-trivial because of the way parquet's and arrow's representations of nested data differ; however, one could use projections to process disjoint sets of columns in parallel. Processing row groups in parallel is the more common approach.
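The column-parallel write pattern described above can be sketched in the same way (a minimal, standalone sketch; `encode_column` is a placeholder for driving an `ArrowColumnWriter` from the linked docs, and is not itself part of the parquet API):

```rust
use std::thread;

// Placeholder for per-column encoding. In real code each worker would feed
// its column's arrays through an ArrowColumnWriter and return the encoded
// column chunk for the row group.
fn encode_column(col: usize, values: &[i64]) -> (usize, i64) {
    (col, values.iter().sum()) // stand-in for an encoded column chunk
}

fn write_parallel(columns: Vec<Vec<i64>>) -> Vec<(usize, i64)> {
    let handles: Vec<_> = columns
        .into_iter()
        .enumerate()
        .map(|(col, values)| thread::spawn(move || encode_column(col, &values)))
        .collect();
    // Collect in spawn order; a real writer must append the encoded chunks
    // to the row group in schema order, which joining in order preserves.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let chunks = write_parallel(vec![vec![1, 2], vec![3, 4]]);
    println!("{:?}", chunks);
}
```

Because each column chunk is encoded independently, only the final stitching of chunks into the row group needs to happen on a single thread.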
Thanks a lot @tustvold, this is very helpful! I'll have a look into all of this; I wasn't aware that it's possible to split up writing columns across individual workers.
DataFusion's writer includes multi-threaded writing of parquet files, FWIW, as well as reading parquet files in parallel.
@alamb, where would I find this exactly? I found this, for example: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.write_table But I'd need something like a `PathBuf` -> `RecordBatch` (and vice versa) interface, basically.
What are you trying to do? Perhaps this is what you are looking for: https://docs.rs/datafusion/latest/datafusion/dataframe/struct.DataFrame.html#method.write_parquet
Which part is this question about?
Library API / UX
Describe your question
With a lot of functions in the `pyarrow` package, there is already some multithreading implemented for you. As far as I understand, reading from a file, for example, is multithreaded by letting each column be processed by a separate thread.
As far as I know, nothing comparable is directly available in this Rust crate. What are users expected to do here?
For example, for simply reading a parquet file in a parallelized manner, would one do something like this?
If there is nothing already offered for you in this crate that does this, should this maybe be part of this crate?
How could parallelized writes work? It's not easily possible to just write parquet files containing one column each and then merge it afterwards, right?