Article Content

  1. Motivation
  2. Overview
  3. Input
  4. Configuration
    1. Timeout for Script Execution
    2. Generate Empty Dataset Output
    3. Manual TI
    4. Load One or More Models
    5. Save One or More Models
    6. Save One or More Model Groups With Assigned Models
    7. Load One or More Model Groups
  5. Output
    1. Datasets
    2. Models
    3. Images
  6. Example
    1. Input
    2. Script 
    3. Workflow
    4. Configuration
    5. Result
  7. Related Articles


Motivation

ONE DATA offers many ways to process data within a Workflow using different Processors. Sometimes, however, it is necessary or simply easier to use custom computation methods. A Python script, for instance, is a good way to customize data processing, because Python is a highly flexible programming language with many open source libraries.

Since Python scripts are such a useful tool, they can be included in Workflows with the ONE DATA Python Processors. This article focuses on the Python Script Single Input Processor.


Overview

The Python Script Single Input Processor takes a given input and executes the specified script on it. The processed data can then be passed as output to other Processors. Several useful libraries for data science tasks are already included. The info button in the left corner of the input box lists the preinstalled packages.


To interact with ONE DATA resources from within the Python script, for example loading Models, accessing Variables or specifying the output of the Processor, it is necessary to use the ONE DATA Python Framework.


For advanced usage of the ONE DATA Python Processors, it can be very helpful to take a deeper look into the framework. This article gives a small insight into it, but does not explain the framework in depth.


Input

The Processor takes any valid dataset as input. The Python script that is used to process the input data can be inserted within the Processor configuration:


In the "Input Name In Script" textfield, it is possible to specify the name of the input dataset in the Python script. The default value is "input"

Then it can be saved into a variable, for example as Pandas Dataframe, like this:

dataset = od_input['input'].get_as_pandas()


or as 2D matrix like this:

matrix = od_input['input'].get_as_matrix()
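
If a different name is set in the "Input Name In Script" field, the dataset is accessed under that key instead. A minimal sketch, assuming the hypothetical input name "sales":

# "Input Name In Script" was set to the hypothetical name "sales"
sales = od_input['sales'].get_as_pandas()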


Configuration

The Processor configuration offers additional options for how the output data should be processed, as well as settings for the script execution itself. It is also possible to load ONE DATA Python Models and use them in the script. All options are described in more depth in the following sections.


Timeout for Script Execution

This is the time in seconds that ONE DATA waits for the script execution and the return of its results. The time starts when the Processor submits the Python script and the data to the Python Service of ONE DATA. If the timeout is exceeded, the calculation is interrupted.

The default value is 300 seconds.


Generate Empty Dataset Output

This option defines whether the Processor should generate an empty dataset after the execution of the script. This is useful, for example, when the script is only used to generate a plot and produces no result dataset. Activating the option prevents a Processor execution error, because by default an output dataset is required.
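
A minimal sketch of such a plot-only script, assuming matplotlib is among the preinstalled packages and "Generate Empty Dataset Output" is activated, so no call to od_output.add_data is needed:

from onelogic.odpf import ImageType

# read the input dataset as a Pandas DataFrame
dataset = od_input['input'].get_as_pandas()

# pandas plotting returns a matplotlib Axes; its figure is added to the image list
ax = dataset.plot()
od_output.add_image("overview_plot", ImageType.PNG, ax.get_figure())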


Manual TI

With this configuration option, it is possible to specify which scale and representation types the columns of the output dataset have, in order to provide correct type inference in ONE DATA.

Possible scale types: nominal, interval, ordinal, ratio. Further information on scale types can be found here.

Possible representation types: string, int, double, datetime, numeric

If the values of a column cannot be converted to the specified representation type, the Processor takes the type that fits their representation best. If the types still do not fit the purpose, it is recommended to use the Data type Conversion Processor.


Load One or More Models


The first dropdown is used to select an existing Python Model from the current project. It is also possible to specify which version of the Model should be loaded.

With the "Open Model" button the view for the selected Model can be accessed directly from the Processor.

With the "Add Group" button multiple Models can be loaded.

To use a selected Model in the script itself, it can be stored in a variable like so:

model = od_models["model_name"]


Note that a Model needs to have a unique name within a Domain in ONE DATA.
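
A loaded Model can then be used like any other Python object. A minimal sketch, assuming the stored Model is a scikit-learn estimator and the input dataset contains the hypothetical feature columns "feature_a" and "feature_b":

# load the Model selected in the Processor configuration (assumption: a scikit-learn estimator)
model = od_models["model_name"]

# read the input dataset and add a prediction column
dataset = od_input['input'].get_as_pandas()
dataset['prediction'] = model.predict(dataset[['feature_a', 'feature_b']])

od_output.add_data("output", dataset)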


Save One or More Models

This configuration option is used to save a generated Python Model to the project, or to add a new version to an existing one.

It has three options:

  • Create New Model: Creates a new Model with the name specified in the textbox below. The name needs to be unique within a Domain.
  • Add New Model Version: Adds a new version to an already existing Model which can be selected below.
  • Create Or Add Version: With this option the Processor either adds a new version to the given Model, or creates a new one if the Model does not exist yet.

Within the script, a Model can be saved using the following statement:

od_output.add_model("my_model", model)


Save One or More Model Groups With Assigned Models


With this option, you can save a Model Group created by Python within ONE DATA. All Model Groups added in the Python script must be configured here, otherwise they will not be saved to the ONE DATA environment. To save a Model stored in a variable under the name "my_model" and assign it to the Model Group "my_model_group", use the following statement in the script:

od_output.add_model("my_model", model, "my_model_group")


Note that a Model Group needs to have a unique name within a Domain in ONE DATA.

Load One or More Model Groups

By using this option, it is possible to load Model Groups for the Python execution. The Models of all loaded Model Groups are accessible in the Python code via the od_models dictionary.

To load a Model named "my_model" and store it in a variable, use the following statement:

model = od_models["my_model"]


Output

The Python Script Single Input Processor has several output types that can be defined within the script.


Datasets

To pass a dataset as output to ONE DATA, the following method is used: 

od_output.add_data("output", dataset)


A new dataset can be created in one of two ways:

  • A 2D matrix representation of the data (a list of rows, where each row is a list of column values) and a list of column names

    from onelogic.odpf.common import ODDataset
    from datetime import datetime
    
    dataset = ODDataset([[1, 2.0, "test", datetime.now()],
                         [2, 3.0, "sample", datetime.now()]],
                        ["int_col", "double_col", "str_col", "timestamp_col"])
  • Pandas DataFrame

    from onelogic.odpf.common import ODDataset
    from datetime import datetime
    from pandas import DataFrame
    
    d = {'int_col': [1, 2], 'double_col': [2.0, 3.0], 'str_col': ['test', 'sample'], 
         'timestamp_col': [datetime.now(), datetime.now()]}
    dataset = ODDataset(DataFrame(data=d))


Current restrictions:

  • If the content is passed as a 2D matrix, column names must be specified and their number must match the length of each row
  • The data types of the columns must be of a supported type


Models

As mentioned above in the configuration section, it is also possible to save Python Models to the project from within the script. This can be achieved like this:

od_output.add_model("model_name", model_data)


Note that the "model_name" here has to exactly match the Model name specified in the Processor configuration.


Images

It is also possible to save plots and graphs generated within the script (for example with Pandas) as images in the "Image List" of the Processor. This can be done using the following method:

od_output.add_image(image_name, image_type, image_data)

where

  • image_name is the name under which the image will be available in the Processor
  • image_type is the type of the created image (either ImageType.PNG or ImageType.JPG)
  • image_data is the image itself, either as a byte array or a matplotlib figure
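
A minimal sketch, assuming matplotlib is among the preinstalled packages and the input dataset contains a hypothetical numeric column "Results":

import matplotlib.pyplot as plt
from onelogic.odpf import ImageType

# read the input dataset (hypothetical numeric column "Results")
dataset = od_input['input'].get_as_pandas()

# create a histogram of the "Results" column
fig, ax = plt.subplots()
ax.hist(dataset["Results"])
ax.set_title("Distribution of Results")

# make the figure available in the Processor's Image List
od_output.add_image("results_histogram", ImageType.PNG, fig)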


Example

Input

As example input, we have a table with three columns that represents a simple calculation: random numbers in the left and right columns, and an operator in the middle.


Script 

import operator
import pandas as pd

ops = {"+": operator.add, "-": operator.sub, "*": operator.mul}

# od_input keys represent the name of the input dataset set in the OD Processor
dataset = od_input['input'].get_as_pandas()
result_values = []
calculations = []

# iterate through the rows of the dataset and calculate the results
for index, row in dataset.iterrows():
    number1 = row['ExampleNumbers']
    number2 = row['ExampleNumbers2']
    op = row["Operators"]  # "op" avoids shadowing the imported operator module
    calculations.append(str(number1) + op + str(number2))
    result_values.append(ops[op](number1, number2))

# create a pandas DataFrame from the result lists
result_column = pd.DataFrame({'Calculation': calculations, 'Results': result_values})

od_output.add_data("output", result_column)


The Python script uses the operator library to map the operator string to the corresponding function, calculates the result for each row, and saves it to a list. The lists are then converted to a Pandas DataFrame, which is passed as output to ONE DATA.


Workflow


Configuration

In this example, the default configuration is used.



Result

The result is a dataset with the calculations and their corresponding results.



Related Articles

Hands-On: Python Processors

Python Script Data Generator Processor

Python Script Dual Input Processor

ONE DATA Python Framework