TABLE OF CONTENTS
- Motivation
- Overview
- Input
- Configuration
- Output
- JWT (JSON Web Token) for Authentication
- Example
- Related Articles
Motivation
With ONE DATA there are many ways to process data within a Workflow using different Processors. But sometimes it is necessary, or simply easier, to use custom computation methods. A Python script, for instance, is a good way to customize the processing of data, because Python is a highly flexible programming language with many open source libraries.
Since Python scripts are such a useful tool, the ONE DATA Python Processors make it possible to include them in Workflows. This article focuses on the Python Script Single Input Processor.
Overview
The Python Script Single Input Processor takes a given input and executes the specified script on it. The processed data can then be passed as output to other Processors. Some useful libraries that are often needed for data science tasks are already included; the info button in the left corner of the input box provides further information on which packages are preinstalled.
To interact with ONE DATA resources from within the Python script, for example loading Models, accessing Variables or specifying the output of the Processor, it is necessary to use the ONE DATA Python Framework.
For advanced usage of the ONE DATA Python Processors, it can be very helpful to take a deeper look into the framework. This article gives a small insight into it, but does not explain the framework in depth.
Input
The Processor takes any valid dataset as input. The Python script used to process the input data is inserted in the configuration:
In the "Input Name In Script" text field, it is possible to specify the name of the input dataset in the Python script. The default value is "input".
The input can then be saved into a variable, for example as a Pandas DataFrame, like this:
dataset = od_input['input'].get_as_pandas()
or as a 2D matrix like this:
matrix = od_input['input'].get_as_matrix()
Configuration
The Processor configuration gives some additional options on how the output data should be processed and some definitions for the script execution itself. It is also possible to load ONE DATA Python Models and use them in the script. All options are described more in-depth in the following sections.
Timeout for Script Execution
This is the time in seconds that ONE DATA waits for the script execution and the return of its results. The time starts when the Processor submits the Python script and the data to the Python Service of ONE DATA. If the timeout is exceeded, the calculation is interrupted.
The default value is 300 seconds.
Generate Empty Dataset Output
This option defines whether the Processor should generate an empty dataset after the execution of the script. This can be very useful when the script is only used to generate a plot, for example, and produces no result dataset. Activating this option prevents a Processor execution error, because by default an output dataset is required.
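For instance, a minimal sketch of such a plot-only script (assuming matplotlib is among the preinstalled packages; the image name "my_plot" is chosen for illustration):
import matplotlib.pyplot as plt
from onelogic.odpf import ImageType

# plot the input data
df = od_input['input'].get_as_pandas()
fig, ax = plt.subplots()
df.plot(ax=ax)

# only an image is added to the output; with "Generate Empty Dataset Output"
# activated, the missing result dataset does not cause an execution error
od_output.add_image("my_plot", ImageType.PNG, fig)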
Manual TI
With this configuration option it is possible to specify what scale and representation type the columns of the output dataset have in order to provide the correct type inference in ONE DATA.
Possible scale types: nominal, interval, ordinal, ratio. Further information on scale types can be found here.
Possible representation types: string, int, double, datetime, numeric
If the values of a column cannot be converted to the specified representation type, the Processor takes the type that best fits their representation. If the resulting types still do not fit the purpose, it is recommended to use the Data Type Conversion Processor.
Load One or More Models
The first dropdown is used to select an existing Python Model from the current project. It is also possible to specify which version of the Model should be loaded.
With the "Open Model" button the view for the selected Model can be accessed directly from the Processor.
With the "Add Group" button multiple Models can be loaded.
To use a selected Model in the script itself, it can be stored in a variable like so:
model = od_models["model_name"]
Note that a Model needs to have a unique name within a Domain in ONE DATA.
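How the loaded Model is used depends on what it contains. As a hedged sketch, assuming the Model wraps a scikit-learn estimator and the input has the feature columns "feature_a" and "feature_b" (both names are illustrative):
# load the Model configured in the Processor
model = od_models["model_name"]

# hypothetical usage: apply the estimator to the input data
df = od_input['input'].get_as_pandas()
df['prediction'] = model.predict(df[['feature_a', 'feature_b']])
od_output.add_data("output", df)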
Save One or More Models
This configuration option is used to save a generated Python Model to the project, or to add a new version to an existing one.
It has three options:
- Create New Model: Creates a new Model with the name specified in the textbox below. The name needs to be unique within a Domain.
- Add New Model Version: Adds a new version to an already existing Model which can be selected below.
- Create Or Add Version: With this option the Processor either adds a new version to the given Model, or creates a new one if the Model does not exist yet.
A Model can be saved using the following statement:
od_output.add_model("my_model", model)
Save One or More Model Groups With Assigned Models
With this option, you can save a Model Group created by Python within ONE DATA. All Model Groups added in the Python script must be configured here, otherwise they will not be saved to the ONE DATA environment. To save a Model stored in a variable model under the name "my_model" and assign it to the Model Group "my_model_group", use the following statement in the script:
od_output.add_model("my_model", model, "my_model_group")
Note that a Model Group needs to have a unique name within a Domain in ONE DATA.
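Several Models can be assigned to the same group; a small illustrative sketch (the variable and Model names are hypothetical):
# save two Models into the same Model Group "my_model_group"
od_output.add_model("model_a", model_a, "my_model_group")
od_output.add_model("model_b", model_b, "my_model_group")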
Load One or More Model Groups
By using this option, it is possible to load Model Groups for the Python execution. The Models of all loaded Model Groups will be accessible in the Python code in the dictionary od_models.
To load a Model named "my_model" and store it in a variable, use the following statement:
model = od_models["my_model"]
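Since od_models is exposed as a dictionary, the loaded Models can also be listed in a loop; a minimal sketch (assuming od_models supports standard dictionary iteration):
# print the names of all Models loaded via Model Groups
for name in od_models:
    print(name)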
Output
The Python Script Single Input Processor has several output types that can be defined within the script.
Datasets
To pass a dataset as output to ONE DATA, the following method is used:
od_output.add_data("output", dataset)
A new dataset can be created in one of two ways:
- From a 2D matrix representation of the data (a list of rows, where each row is a list of column values) together with a list of column names:
from onelogic.odpf.common import ODDataset
from datetime import datetime
dataset = ODDataset([[1, 2.0, "test", datetime.now()],
                     [2, 3.0, "sample", datetime.now()]],
                    ["int_col", "double_col", "str_col", "timestamp_col"])
- From a Pandas DataFrame:
from onelogic.odpf.common import ODDataset
from datetime import datetime
from pandas import DataFrame
d = {'int_col': [1, 2], 'double_col': [2.0, 3.0], 'str_col': ['test', 'sample'],
     'timestamp_col': [datetime.now(), datetime.now()]}
dataset = ODDataset(DataFrame(data=d))
Current restrictions:
- If content is passed as a 2D matrix, column names must be specified, and their number must match the length of each row
- Data types in columns must be of supported type
Models
As mentioned above in the configuration section, it is also possible to save Python Models to the project from within the script. This can be achieved like this:
od_output.add_model("model_name", model_data)
Note that the "model_name" here has to exactly match the Model name specified in the Processor configuration.
Images
It is also possible to save plots and graphs generated within the script (for example with Pandas) as images to the "Image List" of the Processor. This can be done using the following method; a short sketch follows the parameter list:
od_output.add_image(image_name, image_type, image_data)
where
- image_name is the name under which the image will be available in the Processor
- image_type is the type of the created image (either ImageType.PNG or ImageType.JPG)
- image_data is the image itself, either as a byte array or as a matplotlib figure
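A minimal sketch of saving a figure (assuming matplotlib is among the preinstalled packages; the image name "example_plot" is illustrative):
import matplotlib.pyplot as plt
from onelogic.odpf import ImageType

# create a simple figure and add it to the Processor's Image List
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 4, 6])
od_output.add_image("example_plot", ImageType.PNG, fig)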
JWT (JSON Web Token) for Authentication
With ONE DATA Release 3.37.0 the JWT (authentication token) of the executing user (or executing Schedule owner) is now available in Python Processors. This enables the script editor to authenticate against the OD Server API without having to use cleartext credentials.
The JWT can be accessed in the code via the global variable 'od_authorization'. It is already decorated with the necessary "Bearer" prefix, so it can be passed as is to the header 'Authorization' of requests against the OD API. Its basic usage and a specific example are explained at the bottom of the article.
When using Python Processors with the JWT, please note whose token is used:
- Workflow: executor of the Workflow (owner neglected)
- Production Line: executor of the Production Line (all owners neglected)
- Scheduled Workflow: Schedule owner (all other owners/editors/executors neglected)
- Scheduled Production Line: Schedule owner (all other owners/editors/executors neglected)
Security Implications
The fact that different ways of executing a Workflow cause the JWTs of different users to be used has some security implications that should be considered.
JWT of Schedule owner used in Workflow
The owner of a Schedule is used for the JWT creation. If you own a Schedule that runs a Workflow, OD API requests can be made in a Python Processor using your authorization token. These actions are then performed on your behalf.
JWT of Schedule owner used in all new Workflow/Production Line versions
If your Schedule is configured to always use the "latest" version of a Workflow or Production Line, your JWT is used in all new versions of that Workflow/Production Line. Someone with access to the Workflow can change the behavior of the Python Processor, and someone with access to the Production Line can change the executed Workflow. This means someone could, for example, print your JWT without your knowledge.
JWT can be used to impersonate other users
Anyone who obtains your JWT can impersonate you and do anything you can do while logged in. Note, however, that changing your password is not possible with the JWT alone.
Basic Usage
The following snippet shows how to use the global variable 'od_authorization', which contains the JWT, to authenticate against the OD Server.
# required package for sending requests
import requests

# create the header for the request using the global variable `od_authorization` to access the JWT
headers = {'Authorization': od_authorization}

# performing a get request to the `/me` endpoint of the OD Server
# note that `onedata-server:8080` has to be used instead of the full domain name (eg. internal.onedata.de)
r = requests.get("http://onedata-server:8080/api/v1/users/me", headers=headers)

# parse json result and read the username
username = r.json()["username"]

# print the username
print(username)
For an example of how to use this together with the Processor output, take a look at the Python Script Data Generator Processor.
Example
Input
As example input we have a table with three columns that together represent a simple calculation: random numbers in the left and right columns, and an operator in the middle.
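For illustration, the input could look like this (hypothetical values; the column names are the ones referenced in the script below):
ExampleNumbers | Operators | ExampleNumbers2
5              | +         | 3
7              | -         | 2
4              | *         | 6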
Script
import operator
import pandas as pd

ops = {"+": operator.add, "-": operator.sub, "*": operator.mul}

# od_input keys represent the name of the input dataset set in the OD Processor
dataset = od_input['input'].get_as_pandas()

result_values = []
calculations = []

# iterate through the rows of the dataset and calculate the results
for index, row in dataset.iterrows():
    number1 = row['ExampleNumbers']
    number2 = row['ExampleNumbers2']
    op = row["Operators"]  # named "op" so the operator module is not shadowed
    calculations.append(str(number1) + op + str(number2))
    result_values.append(ops[op](number1, number2))
# create a pandas dataframe from the result list
result_column = pd.DataFrame({ 'Calculation':calculations, 'Results':result_values })
od_output.add_data("output", result_column)
The Python script uses the operator library to evaluate the operator string, then calculates the result for each row and saves it to a list. This list is then converted to a Pandas DataFrame to pass it as output to ONE DATA.
Workflow
Configuration
In this example, the default configuration is used.
Result
The result is a dataset with the calculations and their corresponding results.
Related Articles
Python Script Data Generator Processor