Building User-Defined Functions (UDFs)
This section walks you through defining, implementing, and integrating a User-Defined Function (UDF) that processes blockchain data, extracts specific information, and stores it in a PostgreSQL database.
We'll follow four primary steps:
Setting up input and output data classes
Defining database models
Implementing job logic
Integrating the job into Hemera Indexer
Prerequisites
Before you begin, set up the development environment by following Hemera's setup guide, either with Docker or from source.
Step 1: Define the Input and Output Data Classes
Input Data Class
The input data class represents the structure of the data being processed by your job. For blockchain-based UDFs, this might be a Transaction or Log object representing blockchain transactions or events.
Output Data Class
The output data class represents the processed data that will be stored in the database. This class should capture relevant fields from the input data and convert them into a format suitable for saving to the database.
This structure ensures that you are working with typed data and can easily process it within your UDF logic.
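As a rough sketch, the input and output could be modeled as Python dataclasses. The class and field names below are illustrative assumptions for a "high-value transfer" UDF, not Hemera's actual data classes:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    # Input: a raw blockchain transaction as delivered by the indexer
    # (illustrative fields only).
    hash: str
    from_address: str
    to_address: str
    value: int          # amount transferred, in wei
    block_number: int

@dataclass
class HighValueTransfer:
    # Output: the processed record that will be persisted to the database.
    transaction_hash: str
    sender: str
    receiver: str
    amount_eth: float   # value converted from wei to ETH
    block_number: int
```

Keeping the output class flat, with database-friendly field types, makes the later mapping to a SQLAlchemy model straightforward.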
Step 2: Define the Database Model
The model maps the processed data to the database table schema. You will use SQLAlchemy to define your model, which ensures that your data class output is correctly persisted in the PostgreSQL database.
Make sure that the fields defined in the model correspond to those in the data class to ensure proper mapping of the processed data.
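A minimal SQLAlchemy model for the illustrative output class above might look like this (table and column names are assumptions for the example, not part of Hemera's schema):

```python
from sqlalchemy import Column, String, Numeric, BigInteger
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class HighValueTransfers(Base):
    # Table schema mirroring the output data class field by field.
    __tablename__ = "high_value_transfers"

    transaction_hash = Column(String, primary_key=True)
    sender = Column(String)
    receiver = Column(String)
    amount_eth = Column(Numeric)
    block_number = Column(BigInteger)
```

Each column corresponds one-to-one with a field on the output data class, so converting a processed record into a row is a direct attribute-to-column mapping.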
Step 3: Implement the UDF Job
The job logic is where your UDF processes the input data and produces output data. The job needs to:
Define dependencies: Specify which data classes your job depends on, such as Transaction or Log.
Process transactions: Implement the logic to filter, extract, and transform data from the input.
Define get_filter() to specify which blockchain events to process: filter based on criteria such as addresses and topics to limit which events reach your job.
Implement the core logic in _process(): iterate through each transaction, apply filtering logic to select specific transactions (for example, high-value transactions), and transform the data to map it to the fields of the output data class.
Save to database: Convert the output into a format that matches your model and store it in the database.
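Putting these pieces together, a job skeleton could be sketched as follows. The base-class-free structure, the dependency_types attribute, and the threshold filter are all illustrative assumptions; consult Hemera's actual job base classes for the real interface:

```python
from dataclasses import dataclass

# Illustrative stand-in for the indexer's input data class
# (assumed shape, not Hemera's real API).
@dataclass
class Transaction:
    hash: str
    from_address: str
    to_address: str
    value: int  # wei

# Example filter criterion: only transfers above 10 ETH.
THRESHOLD_WEI = 10 * 10**18

class HighValueTransferJob:
    # Declare which data classes this job consumes.
    dependency_types = [Transaction]

    def get_filter(self):
        # Narrow the input stream before processing;
        # here, a simple value threshold stands in for
        # address/topic-based filtering.
        return lambda tx: tx.value >= THRESHOLD_WEI

    def _process(self, transactions):
        # Iterate, filter, and map each selected transaction
        # to the fields of the output record.
        selected = self.get_filter()
        results = []
        for tx in transactions:
            if selected(tx):
                results.append({
                    "transaction_hash": tx.hash,
                    "sender": tx.from_address,
                    "receiver": tx.to_address,
                    "amount_eth": tx.value / 10**18,
                })
        return results
```

The returned dictionaries match the model's columns, so the save step reduces to constructing model instances from each result and committing them through a database session.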
Step 4: Integrate Your UDF Job into Hemera Indexer
After defining your job, integrate it into Hemera Indexer so it can run as part of the indexing process. You may need to:
Register the job: Place your job file in the appropriate directory within the Hemera Indexer project (e.g., hemera/jobs/custom/).
Update configuration: Ensure that your job is included in the Hemera Indexer job registry or configuration files so that it is executed during indexing.
Once updated, run Hemera Indexer. For more detailed steps and deployment options, refer to the Testing and Running UDFs section.