Release 3.8.0 (42) - Writing CSV File to File System (46.26.0 // 1.168.0)
Modified on: Thu, 30 Jul, 2020 at 9:58 AM
The following lists all features that are (at least partly) included in this release. A feature is finished if no open parts (stories) remain; otherwise it is marked as ongoing.
Support for handling Zip-files in connections
- Zip-files can be used in connections.
- Content of Zip-files can be listed in connection.
- Whole content will be listed.
- It is possible to merge n Zip-files to one datatable.
- 1-n files out of a Zip-file can be loaded as a single datatable by applying merge rules.
- Preview of the first 5 or more rows in datatable detail view.
Finished parts in the release: Support for Zip-files in connections
- Merge rules can be created in the connection settings to define which files (in one or multiple Zips) can be merged and the name of the single resulting file.
- In the connection's file list page, the merged files and the original files are shown.
- Combinations of files in Zips and files in folders can be merged via merge rules.
- Extend info-text: Zips are treated as sub-folders.
- The merged file can be used with the Data Table Load Processor; the usage should be the same as with a Data Table from a single unmerged file out of a Zip.
- Merging should work with file sizes of at least 50 GB unzipped (for example: 2 files with 25 GB each, or 1000 files with 50 MB each).
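As a sketch of how such a merge could behave, the following Python snippet streams all CSV members matching a pattern out of several Zip files into one combined file, keeping the header only once. The pattern-based merge rule and the function name are illustrative assumptions, not the product's actual implementation; streaming row by row is one way to keep memory flat for very large (e.g. 50 GB unzipped) inputs.

```python
import csv
import fnmatch
import io
import zipfile

def merge_zip_csvs(zip_paths, pattern, out_path):
    """Merge all CSV members matching `pattern` from the given Zip files
    into one CSV file, writing the header only once.
    (Illustrative sketch; the real merge-rule semantics are defined
    in the connection settings, not shown here.)"""
    header_written = False
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        for zip_path in zip_paths:
            with zipfile.ZipFile(zip_path) as zf:
                for name in zf.namelist():
                    if not fnmatch.fnmatch(name, pattern):
                        continue
                    # Stream each member row by row instead of loading
                    # it fully, so memory use stays flat for large files.
                    with zf.open(name) as member:
                        reader = csv.reader(
                            io.TextIOWrapper(member, encoding="utf-8"))
                        header = next(reader, None)
                        if header is None:
                            continue  # skip empty members
                        if not header_written:
                            writer.writerow(header)
                            header_written = True
                        writer.writerows(reader)
    return out_path
```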
Writing .csv files to mounted filesystem
Goal: It is possible to exchange information/files with external tools via files.
Finished parts in the release: [Server] Writing .csv file to filesystem
It is possible to write files from a DataTable Save Processor to a configured file system connection, using the provided file name and the following configurations:
- Save mode: CREATE NEW or REPLACE
- Default delimiter is: ,
- Default string escape token: "
- Default escape token: \
- Default encoding: UTF-8
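A minimal Python sketch of these write settings; the function and its mode handling are illustrative (the actual Save Processor runs server-side), but the delimiter, quote/escape tokens, and encoding match the defaults listed above:

```python
import csv

def save_csv(rows, path, mode="CREATE_NEW"):
    """Write rows to `path` using the defaults listed above:
    delimiter ',', string escape token '"', escape token '\\', UTF-8.
    CREATE_NEW fails if the file already exists; REPLACE overwrites it.
    (Illustrative sketch, not the product's implementation.)"""
    # 'x' refuses to overwrite an existing file; 'w' truncates it.
    open_mode = {"CREATE_NEW": "x", "REPLACE": "w"}[mode]
    with open(path, open_mode, newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter=",", quotechar='"',
                            escapechar="\\")
        writer.writerows(rows)
```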
[Client] Expert mode for processor configuration
- It is possible to configure any processor completely in JSON (without any validation of the JSON's correctness).
- Expert mode can only be activated while the processor is open, and it is activated for that specific processor only.
- When expert mode is active, the config tab in the processor shows a different icon.
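Since expert mode performs no validation, a user may want to check the JSON externally before saving. A small illustrative sketch using Python's standard `json` module (the config keys in the example are hypothetical, not the product's actual schema):

```python
import json

def check_processor_config(raw):
    """Parse a processor configuration edited in expert mode and report
    the first syntax error, since the expert-mode editor itself offers
    no support on correctness. Returns (config, None) on success or
    (None, error_message) on a syntax error."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as exc:
        return None, f"line {exc.lineno}, column {exc.colno}: {exc.msg}"

# Hypothetical expert-mode config, for illustration only:
example = '{"saveMode": "REPLACE", "delimiter": ",", "encoding": "UTF-8"}'
config, err = check_processor_config(example)
```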
Auto-completion for Data Tables
Goal: As a Data Analyst I want to query DataTables from the SQL editor with auto-completion on column names.
Finished parts in the release: API endpoint to fetch DataTable columns on ONE DATA Server
- API endpoint on the OD server to provide the columns for a specified DataTable.
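A sketch of how a client could turn such an endpoint's payload into completion candidates. The response shape and field names below are assumptions for illustration, not the documented API:

```python
# Hypothetical response body of the columns endpoint; the actual URL
# and schema are not specified in these release notes.
sample_response = {
    "dataTableId": "dt-123",
    "columns": [
        {"name": "customer_id", "type": "BIGINT"},
        {"name": "created_at", "type": "TIMESTAMP"},
    ],
}

def column_completions(response):
    """Turn the (assumed) endpoint payload into (name, type) pairs
    that an editor could feed into its auto-completion list."""
    return [(c["name"], c["type"]) for c in response["columns"]]
```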
Auto-Completion for Oracle Connections
- Using LSP for communication between client and server
- Auto-completion should also show the type of the available resource (e.g. table, view, column type).
- Right now it is not yet possible to fetch connections from apps via a drop-down, so they need to be "mocked": the client sends a hard-coded Connection ID and Key ID to the server.
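For illustration, an LSP completion item can carry the resource type in its `detail` field, which is how the editor can show "table" or a column type next to each suggestion. The `CompletionItemKind` codes follow the LSP specification (5 = Field, 7 = Class, which some SQL language servers use for columns and tables respectively); the labels and detail strings below are made up:

```python
def make_completion_item(label, kind, detail):
    """Build one LSP CompletionItem dict, placing the resource type
    (e.g. table, view, column type) in `detail` so the client editor
    can display it alongside the suggestion."""
    return {"label": label, "kind": kind, "detail": detail}

# Illustrative Oracle-flavoured suggestions:
items = [
    make_completion_item("CUSTOMERS", 7, "table"),
    make_completion_item("CUSTOMER_ID", 5, "column: NUMBER(10)"),
]
```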
Finished parts in the release: API endpoint for database schemata of the ONE DATA server
- As a developer of an LSP microservice for PL/SQL I need to be able to determine the database schema of an ETL connection without having to transfer sensitive credentials between microservices.