Creating a Custom Script for Analyzing Parametric Data

Modify the default Jupyter Notebook, Data Space Analysis, to meet your parametric data analysis needs.

Introduced in December 2024

Note For more information on performing a data space analysis using different APIs, refer to the ni_data_space_analyzer Python library in the How_to_use_ni_data_space_analyzer document on GitHub.
Before you begin, download the Data Space Analysis notebook, Data_Space_Default_Analysis.ipynb, from GitHub.
  1. Navigate to Analysis » Scripts.
  2. Right-click the notebook and rename it.
  3. To include the custom analyses, update the metadata of the notebook's parameters cell. For example, add custom_analysis_scalar and custom_analysis_vector as outputs.
    {
      "papermill": {
        "parameters": {
          "trace_data": "",
          "workspace_id": "",
          "analysis_options": []
        }
      },
      "systemlink": {
        "outputs": [
          {
            "display_name": "Custom Analysis Scalar",
            "id": "custom_analysis_scalar",
            "type": "scalar"
          },
          {
            "display_name": "Custom Analysis Vector",
            "id": "custom_analysis_vector",
            "type": "vector"
          }
        ],
        "parameters": [
          {
            "display_name": "Trace Data",
            "id": "trace_data",
            "type": "string"
          },
          {
            "display_name": "Analysis Options",
            "id": "analysis_options",
            "type": "string[]"
          }
        ]
      },
      "tags": ["parameters"]
    }
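    The papermill block above declares the values SystemLink injects when it executes the notebook. For reference, the matching cell tagged parameters defines the same names with placeholder defaults; a minimal sketch based on the metadata above:
    # Cell tagged "parameters" -- papermill overwrites these values at run time.
    trace_data = ""
    workspace_id = ""
    analysis_options = []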
  4. Update the list of analyses the notebook supports and specify their output types.
    supported_analysis = [
        {"id": "custom_analysis_scalar", "type": "scalar"},
        {"id": "custom_analysis_vector", "type": "vector"}
    ]
    # Collect the IDs that may appear in the analysis_options parameter.
    supported_analysis_options = [analysis["id"] for analysis in supported_analysis]
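    With the supported IDs collected, the notebook can discard unsupported entries from the analysis_options parameter before computing anything. The filtering step below is an illustration, not part of the default notebook:
    # Keep only the requested analyses this notebook actually implements.
    analysis_options = [
        option for option in analysis_options if option in supported_analysis_options
    ]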
    
  5. Add functions that compute the custom analysis and add the results to the original data frame.
    def compute_custom_analysis_scalar(dataframe):
        # Store the scalar result in a new column of the trace data frame.
        analysis_result = perform_custom_analysis_scalar(dataframe)
        dataframe["custom_analysis_scalar"] = float(analysis_result)
    
    def compute_custom_analysis_vector(dataframe):
        # Store the vector result in a new column of the trace data frame.
        analysis_result = perform_custom_analysis_vector(dataframe)
        dataframe["custom_analysis_vector"] = list(analysis_result)
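    The perform_custom_analysis_scalar and perform_custom_analysis_vector helpers stand in for your own logic. As a minimal sketch, assuming the trace values live in a column named "y" (an assumption for this example; use your actual column names), they could look like this:
    import pandas as pd

    # Hypothetical analyses -- replace with your own computations.
    def perform_custom_analysis_scalar(dataframe: pd.DataFrame) -> float:
        # Scalar analysis: one number per trace, here the mean of the y-values.
        return dataframe["y"].mean()

    def perform_custom_analysis_vector(dataframe: pd.DataFrame) -> list:
        # Vector analysis: one value per sample, here a 5-point rolling average.
        return dataframe["y"].rolling(window=5, min_periods=1).mean().tolist()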
  6. Perform the analysis for each individual trace.
    def perform_analysis(data_frame: pd.DataFrame) -> pd.DataFrame:
        data_space_analyzer = DataSpaceAnalyzer(dataframe=data_frame)
        # Run each requested analysis on the trace data frame.
        for option in analysis_options:
            if option == "custom_analysis_scalar":
                compute_custom_analysis_scalar(data_frame)
            elif option == "custom_analysis_vector":
                compute_custom_analysis_vector(data_frame)
        return data_space_analyzer.generate_analysis_output(
            analysis_options=analysis_options,
            supported_analysis=supported_analysis,
        )
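    To spot-check the function outside of SystemLink, you can run it against a small data frame. This is a hypothetical smoke test; it assumes the ni_data_space_analyzer package is installed and that your helpers read a "y" column:
    import pandas as pd

    # Made-up trace data for a quick local check.
    analysis_options = ["custom_analysis_scalar"]
    sample_trace = pd.DataFrame({"y": [1.0, 2.0, 3.0]})
    print(perform_analysis(sample_trace))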
  7. Consolidate the results and save them as an artifact.
    data_space_analyzer = DataSpaceAnalyzer(pd.DataFrame())
    final_result = []
    # Load the traces from the data space and analyze each one.
    traces = data_space_analyzer.load_dataset(trace_data)
    for trace in traces:
        trace_name = trace["name"]
        trace_values = trace["data"]
        analysis_results = perform_analysis(trace_values)
        final_result.append({"plot_label": trace_name, "data": analysis_results})
    output_artifact_id = data_space_analyzer.save_analysis(workspace_id, final_result)
  8. Using the scrapbook library, glue the output artifact ID to the execution result.
    import scrapbook as sb

    sb.glue("result", output_artifact_id)
    
  9. Save the notebook.
  10. Publish the notebook to SystemLink Enterprise under the Data Space Analysis interface.
After creating your custom script, use it to analyze parametric data in a data space.