Modify the default Data Space Analysis Jupyter notebook to meet your parametric data analysis needs.

Introduced in December 2024

Note: For more details about using the different data space analysis APIs, refer to the How_to_use_ni_data_space_analyzer document for the ni_data_space_analyzer Python library on GitHub.
Before you begin, download the Data Space Analysis notebook (Data_Space_Default_Analysis.ipynb) from GitHub.
  1. Navigate to Analysis » Scripts.
  2. Right-click the notebook and rename it.
  3. To include custom analyses, update the metadata of the notebook parameters. For example, add custom_analysis_scalar and custom_analysis_vector:
    {
      "papermill": {
        "parameters": {
          "trace_data": "",
          "workspace_id": "",
          "analysis_options": []
        }
      },
      "systemlink": {
        "outputs": [
          {
            "display_name": "Custom Analysis Scalar",
            "id": "custom_analysis_scalar",
            "type": "scalar"
          },
          {
            "display_name": "Custom Analysis Vector",
            "id": "custom_analysis_vector",
            "type": "vector"
          }
        ],
        "parameters": [
          {
            "display_name": "Trace Data",
            "id": "trace_data",
            "type": "string"
          },
          {
            "display_name": "Analysis Options",
            "id": "analysis_options",
            "type": "string[]"
          }
        ]
      },
      "tags": ["parameters"]
    }
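    The code cell that carries this metadata is the cell papermill treats as the parameters cell, so its body declares matching defaults. A minimal sketch of that cell (the assignments below simply mirror the papermill parameters listed above and are shown for illustration):

    trace_data = ""
    workspace_id = ""
    analysis_options = []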
  4. Update the list of analyses the notebook supports and specify their outputs.
    supported_analysis = [
        {"id": "custom_analysis_scalar", "type": "scalar"},
        {"id": "custom_analysis_vector", "type": "vector"}
    ]
    supported_analysis_options = list(map(lambda x: x["id"], supported_analysis))
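    Optionally, you can reject requests for analyses the notebook does not support before running anything. The check below is a plain-Python sketch that reuses the variables defined above; it is not part of the ni_data_space_analyzer library:

    # Fail early if any requested analysis option is not supported by this notebook.
    unsupported = [option for option in analysis_options if option not in supported_analysis_options]
    if unsupported:
        raise ValueError(f"Unsupported analysis options: {unsupported}")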
    
  5. Add functions that compute the custom analyses and append the results to the original dataframe.
    def compute_custom_analysis_scalar(dataframe):
        analysis_result = perform_custom_analysis_scalar(dataframe)
        dataframe["custom_analysis_scalar"] = float(analysis_result)
    
    def compute_custom_analysis_vector(dataframe):
        analysis_result = perform_custom_analysis_vector(dataframe)
        dataframe["custom_analysis_vector"] = list(analysis_result)
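    Here, perform_custom_analysis_scalar and perform_custom_analysis_vector stand in for your own computations and are not provided by the library. A minimal sketch, assuming each trace dataframe contains a numeric y column (an assumption for illustration only):

    def perform_custom_analysis_scalar(dataframe):
        # Hypothetical scalar analysis: mean of the assumed "y" column.
        return dataframe["y"].mean()

    def perform_custom_analysis_vector(dataframe):
        # Hypothetical vector analysis: running (cumulative) sum of the assumed "y" column.
        return dataframe["y"].cumsum()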
  6. Perform the analysis on each individual trace.
    def perform_analysis(data_frame: pd.DataFrame) -> pd.DataFrame:
        data_space_analyzer = DataSpaceAnalyzer(dataframe=data_frame)
        for option in analysis_options:
            if option == "custom_analysis_scalar":
                compute_custom_analysis_scalar(data_frame)
            elif option == "custom_analysis_vector":
                compute_custom_analysis_vector(data_frame)
        return data_space_analyzer.generate_analysis_output(
            analysis_options=analysis_options, supported_analysis=supported_analysis
        )
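    Before publishing, you can sanity-check perform_analysis locally with a small dataframe. The column names and options below are assumptions chosen to match the sketch in step 5:

    import pandas as pd

    # Hypothetical local check; in the notebook, trace data comes from DataSpaceAnalyzer.load_dataset.
    analysis_options = ["custom_analysis_scalar", "custom_analysis_vector"]
    sample_trace = pd.DataFrame({"x": [0, 1, 2], "y": [1.0, 2.0, 4.0]})
    result = perform_analysis(sample_trace)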
  7. Combine the results and save them as an artifact.
    data_space_analyzer = DataSpaceAnalyzer(pd.DataFrame())
    final_result = []
    traces = data_space_analyzer.load_dataset(trace_data)
    for trace in traces:
        trace_name = trace["name"]
        trace_values = trace["data"]
        analysis_results = perform_analysis(trace_values)
        final_result.append({"plot_label": trace_name, "data": analysis_results})
    output_artifact_id = data_space_analyzer.save_analysis(workspace_id, final_result)
  8. Use the Scrapbook library to glue the output artifact to the execution result.
    import scrapbook as sb

    sb.glue("result", output_artifact_id)
    
  9. Save the notebook.
  10. Publish the notebook under the Data Space Analysis interface in SystemLink Enterprise.
After you create the custom script, you can use it to analyze parametric data in a data space.