Load Micro Focus Service Manager data in Python using dltHub

Build a Micro Focus Service Manager-to-database or -dataframe pipeline in Python using dlt with automatic Cursor support.

In this guide, we'll set up a complete Micro Focus Service Manager data pipeline from API credentials to your first data load in just 10 minutes. You'll end up with a fully declarative Python pipeline based on dlt's REST API connector, like the partial example below:

Example code
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def micro_focus_service_manager_source(access_token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "http://localhost:13080/SM/9/rest/incidents",
            "auth": {
                "type": "bearer",
                "token": access_token,
            },
        },
        "resources": [
            "IM10001",
            "IM10003",
        ],
    }
    [...]
    yield from rest_api_resources(config)


def get_data() -> None:
    # Connect to destination
    pipeline = dlt.pipeline(
        pipeline_name='micro_focus_service_manager_pipeline',
        destination='duckdb',
        dataset_name='micro_focus_service_manager_data',
    )

    # Load the data
    load_info = pipeline.run(micro_focus_service_manager_source())
    print(load_info)

Why use dltHub Workspace with LLM Context to generate Python pipelines?

  • Accelerate pipeline development with AI-native context
  • Debug pipelines, validate schemas and data with the integrated Pipeline Dashboard
  • Build Python notebooks for end users of your data
  • Low maintenance, thanks to schema evolution with type inference, resilience, and self-documenting REST API connectors. A shallow learning curve makes the pipeline easy for any team member to extend
  • dlt is the tool of choice for Pythonic Iceberg lakehouses, bringing mature data loading to Iceberg with or without catalogs

What you’ll do

We’ll show you how to generate a readable and easily maintainable Python script that fetches data from micro_focus_service_manager’s API and loads it into Iceberg, DataFrames, files, or a database of your choice. Here are some of the endpoints you can load (a configuration sketch follows the list):

  • Incident Management Endpoints: These endpoints are related to the management of incidents, including creating, retrieving, and attaching files to incidents.

    • /SM/9/rest/incidents/{number}: Retrieve details of a specific incident by its number.
    • /SM/9/rest/incidents: Access or manage a list of incidents.
    • /SM/9/rest/incidents/IM10003/attachments: Manage attachments for a specific incident (IM10003).
    • /SM/9/rest/incidents/IM10001/attachments/58ac7b3200242211802b29a8: Access a specific attachment for a given incident (IM10001).
  • WSDL Endpoints: These endpoints provide the Web Services Description Language (WSDL) files for incident management.

    • /sc62server/PWS/IncidentManagement.wsdl: WSDL file for the incident management service.
    • /SM/7/IncidentManagement.wsdl: WSDL file for an earlier version of the incident management service.
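The REST endpoints above map naturally onto dlt's declarative resource definitions. The sketch below shows one plausible way to express them; the localhost base URL, bearer auth, and resource names are assumptions carried over from the example at the top of this guide, not values verified against a live Service Manager instance:

    import dlt
    from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources

    @dlt.source
    def incidents_source(access_token: str = dlt.secrets.value):
        config: RESTAPIConfig = {
            "client": {
                # Assumed host/port; adjust to your Service Manager instance
                "base_url": "http://localhost:13080/SM/9/rest/",
                "auth": {"type": "bearer", "token": access_token},
            },
            "resources": [
                # GET /SM/9/rest/incidents -> one row per incident
                {"name": "incidents", "endpoint": {"path": "incidents"}},
                # GET /SM/9/rest/incidents/IM10003/attachments -> attachment metadata
                {"name": "attachments", "endpoint": {"path": "incidents/IM10003/attachments"}},
            ],
        }
        yield from rest_api_resources(config)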

You will then debug the Micro Focus Service Manager pipeline using our Pipeline Dashboard tool to ensure it is copying the data correctly, before building a Notebook to explore your data and build reports.

Setup & steps to follow

💡

Before getting started, let's make sure Cursor is set up correctly.

Now you're ready to get started!

  1. ⚙️ Set up dlt Workspace

    Install dlt with duckdb support:

    pip install "dlt[workspace]"

    Initialize a dlt pipeline with Micro Focus Service Manager support:

    dlt init dlthub:micro_focus_service_manager duckdb

    The init command will set up the necessary files and folders for the next step.

  2. 🤠 Start LLM-assisted coding

    Here’s a prompt to get you started:

    Prompt
    Please generate a REST API Source for the Micro Focus Service Manager API, as specified in @micro_focus_service_manager-docs.yaml. Start with endpoints IM10001 and IM10003 and skip incremental loading for now. Place the code in micro_focus_service_manager_pipeline.py and name the pipeline micro_focus_service_manager_pipeline. If the file exists, use it as a starting point. Do not add or modify any other files. Use @dlt rest api as a tutorial. After adding the endpoints, allow the user to run the pipeline with python micro_focus_service_manager_pipeline.py and await further instructions.
  3. 🔒 Set up credentials

    The snippets indicate that Service Manager supports client authentication via certificates with two-way SSL, and that administrators can set up SSL, TSO, and LW-SSO as authentication methods; however, no specific keys, tokens, or other authentication parameters are documented.

    To get the appropriate API keys, please visit the original source at https://docs.microfocus.com/SM/9.51/Hybrid/Content/webservicesguide/reference/attachment_operations_using_REST_API.htm. To protect your secrets in a production environment, look into setting up credentials with dlt; a sketch of how dlt resolves them is shown below.
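
    However the token is obtained, dlt can inject it without hardcoding it in the script. A minimal sketch, assuming the source function from the example above (the secrets section name is an assumption; dlt also resolves secrets from broader sections and from environment variables):

    import dlt

    # .dlt/secrets.toml -- keep this file out of version control:
    #
    #   [sources.micro_focus_service_manager_pipeline]
    #   access_token = "<your token>"
    #
    # Alternatively, export it as an environment variable, e.g.
    # SOURCES__MICRO_FOCUS_SERVICE_MANAGER_PIPELINE__ACCESS_TOKEN.

    @dlt.source
    def micro_focus_service_manager_source(access_token: str = dlt.secrets.value):
        # dlt resolves access_token from secrets.toml or the environment
        ...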

  4. 🏃‍♀️ Run the pipeline in the Python terminal in Cursor

    python micro_focus_service_manager_pipeline.py

    If your pipeline runs correctly, you’ll see something like the following:

    Pipeline micro_focus_service_manager load step completed in 0.26 seconds
    1 load package(s) were loaded to destination duckdb and into dataset micro_focus_service_manager_data
    The duckdb destination used duckdb:/micro_focus_service_manager.duckdb location to store data
    Load package 1749667187.541553 is LOADED and contains no failed jobs
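
    The summary above only prints; if you want the script itself to fail when a load job fails, you can assert on the returned LoadInfo. A small addition to the get_data() function from the example at the top of this guide:

    # inside get_data(), after pipeline.run(...)
    load_info = pipeline.run(micro_focus_service_manager_source())
    # Raise instead of silently continuing if any load job failed
    load_info.raise_on_failed_jobs()
    print(load_info)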
  5. 📈 Debug your pipeline and data with the Pipeline Dashboard

    Now that you have a running pipeline, you need to make sure it’s correct, so you do not introduce silent failures like misconfigured pagination or incremental loading errors. By launching the dlt Workspace Pipeline Dashboard, you can inspect the pipeline and verify that it is copying the data correctly. Here you can see:

    • Pipeline overview: state and load metrics
    • Schema: tables, columns, types, hints
    • Data: query the loaded rows directly
    dlt pipeline micro_focus_service_manager_pipeline show
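
    For a quick programmatic spot check alongside the dashboard, you can also query the DuckDB file directly. A minimal sketch, assuming the file and dataset names from the load output above and that the IM10001 resource landed as a table named im10001 (dlt normalizes table names to lowercase):

    import duckdb

    con = duckdb.connect("micro_focus_service_manager.duckdb")
    # Count the rows dlt loaded for one resource (table name assumed)
    print(con.sql("SELECT COUNT(*) FROM micro_focus_service_manager_data.im10001"))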
  6. 🐍 Build a Notebook with data explorations and reports

    With the pipeline and data partially validated, you can continue with custom data explorations and reports. To get started, paste the snippet below into a new marimo Notebook and ask your LLM to go from there. Jupyter Notebooks and regular Python scripts are supported as well.

    import dlt

    data = dlt.pipeline("micro_focus_service_manager_pipeline").dataset()
    # get "IM10001" table as a Pandas frame
    data["IM10001"].df().head()
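
    From there, a first exploration might profile the incident table before you ask the LLM for full reports. A rough sketch to continue in the same notebook (which columns come back depends on your Service Manager instance):

    incidents = data["IM10001"].df()
    print(incidents.dtypes)                      # inspect the inferred schema
    print(incidents.describe(include="all").T)   # quick profile of every column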
