Creating a Custom Script for Analyzing Parametric Data
- Updated 2025-04-25
- 2 minute(s) read
Modify the default Jupyter Notebook, Data Space Analysis, to meet your parametric data analysis needs.
Introduced in December 2024
Note: For more information on performing a data space analysis using different APIs, refer to the ni_data_space_analyzer Python library in the How_to_use_ni_data_space_analyzer document on GitHub.
- Navigate to .
- Right-click the notebook and rename it.
- To include the custom analyses, update the metadata of the notebook parameters. For example, add custom_analysis_scalar and custom_analysis_vector.
{
  "papermill": {
    "parameters": {
      "trace_data": "",
      "workspace_id": "",
      "analysis_options": []
    }
  },
  "systemlink": {
    "outputs": [
      {
        "display_name": "Custom Analysis Scalar",
        "id": "custom_analysis_scalar",
        "type": "scalar"
      },
      {
        "display_name": "Custom Analysis Vector",
        "id": "custom_analysis_vector",
        "type": "vector"
      }
    ],
    "parameters": [
      {
        "display_name": "Trace Data",
        "id": "trace_data",
        "type": "string"
      },
      {
        "display_name": "Analysis Options",
        "id": "analysis_options",
        "type": "string[]"
      }
    ]
  },
  "tags": ["parameters"]
}
- Update the list of analyses supported by the notebook and specify their output.
supported_analysis = [
    {"id": "custom_analysis_scalar", "type": "scalar"},
    {"id": "custom_analysis_vector", "type": "vector"},
]
supported_analysis_options = list(map(lambda x: x["id"], supported_analysis))
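Run on its own, the snippet above reduces the analysis definitions to their ids, which is the list the notebook later matches against the incoming analysis_options parameter:

```python
# Reduce the supported-analysis definitions to their ids.
supported_analysis = [
    {"id": "custom_analysis_scalar", "type": "scalar"},
    {"id": "custom_analysis_vector", "type": "vector"},
]
supported_analysis_options = list(map(lambda x: x["id"], supported_analysis))
# supported_analysis_options is now
# ["custom_analysis_scalar", "custom_analysis_vector"]
```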
- Add functions that compute the custom analyses and append the results to the original data frame.
def compute_custom_analysis_scalar(dataframe):
    analysis_result = perform_custom_analysis_scalar(dataframe)
    dataframe["custom_analysis_scalar"] = float(analysis_result)

def compute_custom_analysis_vector(dataframe):
    analysis_result = perform_custom_analysis_vector(dataframe)
    dataframe["custom_analysis_vector"] = list(analysis_result)
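The perform_custom_analysis_* helpers above stand in for your own analysis logic. A minimal sketch, assuming the trace values live in a column named "value" (both the column name and the statistics chosen here are illustrative, not part of the default notebook):

```python
import pandas as pd

def perform_custom_analysis_scalar(dataframe):
    # Illustrative scalar analysis: mean of the hypothetical "value" column.
    return dataframe["value"].mean()

def perform_custom_analysis_vector(dataframe):
    # Illustrative vector analysis: 3-sample rolling mean of "value".
    return dataframe["value"].rolling(window=3, min_periods=1).mean()

df = pd.DataFrame({"value": [1.0, 2.0, 3.0, 4.0]})
scalar = float(perform_custom_analysis_scalar(df))  # 2.5
vector = list(perform_custom_analysis_vector(df))   # [1.0, 1.5, 2.0, 3.0]
```

Any computation that yields a single float (for a scalar output) or a sequence (for a vector output) fits the same pattern.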
- Perform an analysis for individual traces.
def perform_analysis(data_frame: pd.DataFrame) -> pd.DataFrame:
    data_space_analyzer = DataSpaceAnalyzer(dataframe=data_frame)
    for option in analysis_options:
        if option == "custom_analysis_scalar":
            compute_custom_analysis_scalar(data_frame)
        elif option == "custom_analysis_vector":
            compute_custom_analysis_vector(data_frame)
    return data_space_analyzer.generate_analysis_output(
        analysis_options=analysis_options,
        supported_analysis=supported_analysis,
    )
- Consolidate the results and save them as an artifact.
data_space_analyzer = DataSpaceAnalyzer(pd.DataFrame())
final_result = []
traces = data_space_analyzer.load_dataset(trace_data)
for trace in traces:
    trace_name = trace["name"]
    trace_values = trace["data"]
    analysis_results = perform_analysis(trace_values)
    final_result.append({"plot_label": trace_name, "data": analysis_results})
output_artifact_id = data_space_analyzer.save_analysis(workspace_id, final_result)
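For reference, each entry in final_result pairs a trace label with that trace's analysis output. A minimal illustration of the shape, with made-up trace names and payloads standing in for whatever generate_analysis_output returns:

```python
# Hypothetical consolidated result for two traces.
final_result = [
    {"plot_label": "Trace 1", "data": {"custom_analysis_scalar": 2.5}},
    {"plot_label": "Trace 2", "data": {"custom_analysis_vector": [1.0, 1.5, 2.0]}},
]

# The plot_label values identify each trace in the saved artifact.
labels = [entry["plot_label"] for entry in final_result]
```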
- Using the scrapbook library, glue the output artifact to the execution result.
sb.glue("result", output_artifact_id)
- Save the notebook.
- Publish the notebook to SystemLink Enterprise under the Data Space Analysis interface.
Related Information
- Data Space Analysis Notebook (GitHub)
- Analyzing Parametric Data in a Data Space
Derive insights about your parametric data in a data space.
- Publishing a Jupyter Notebook to SystemLink Enterprise
Publish a Jupyter Notebook (.ipynb) to SystemLink Enterprise so you can use the notebook for a data analysis and for visualizations.