Reference Information

This article contains reference information about the following:

  • Accessing technical documentation
  • Initializing the SDK Client
  • Setting authentication credentials
  • Using Digital Twins
  • Using Sight Machine data models
  • Generating data visualizations
  • Structuring an analysis for use in the Sight Machine platform

Accessing Technical Documentation

You can find the complete technical documentation packaged with the SDK. The docstrings can also be accessed within Python. For example:
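Docstrings can be read with Python's built-in `help()` or the `__doc__` attribute. The sketch below uses a stand-in class, since the SDK's own class and method names may differ:

```python
class Client:
    """Stand-in for an SDK class with a docstring."""

    def login(self, method):
        """Authenticate the session using the given method."""

# help(Client) prints the full formatted docstring;
# __doc__ returns the raw docstring text.
print(Client.__doc__)
print(Client.login.__doc__)
```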


Initializing the SDK Client

The SDK Client provides methods for authentication and initializing new DigitalTwin and Plot objects.

To initialize the SDK Client:

  1. Run:

     cli = sm.Client('<tenantname>', auto_login=False)

     where '<tenantname>' is the name of your Sight Machine subdomain (e.g., 'demo').

     The Client is used to inspect configurations, retrieve data, and generate visualizations.

Setting Authentication Credentials

After installing the SDK, you need to set authentication credentials. You only need to authenticate the first time that you use the SDK; the credentials will be stored for future use.

To set authentication credentials:

  1. Log in using the method provided by Sight Machine:

     cli.login('basic', email='<email>', password='<password>')

     or

     cli.login('apikey', key='<apikey>')

Using Digital Twins

Digital Twins store the configuration for user-configured models such as Machine and Machine Type. Their most common application is to look up information that you can use to refine a query for data, such as the database-friendly name of a particular machine or machine type.

To use Digital Twins:

  1. Run:

     dt_mch = cli.get_twin('Machine')
     df_mch_config = dt_mch.fetch_meta(cli.session)
     display(type(df_mch_config), df_mch_config.shape)

Using Sight Machine Data Models

Sight Machine has developed a number of data models for contextualized data. Some, such as Machine Type and Machine, are user-configured models, while others, such as Cycles, Downtimes, and Defects, are generated by the AI Data Pipeline.

For information about interacting with models such as Machine, see Using Digital Twins above.

For more information, see the articles describing Sight Machine’s data models.

Retrieving Data

The SDK provides a simple interface for downloading data from models such as Cycles, Downtimes, and Defects.

To retrieve data:

  1. Generate a query to limit the data returned.
     The Sight Machine SDK supports a PyMongo-like query syntax. See the PyMongo Tutorial for examples. One notable difference is that the Sight Machine API does not support logical OR.
  2. You may wish to explore Digital Twins before generating a query. Get the Digital Twin for the machine that you are interested in gathering data from:

     MACHINE_TYPE = 'Lasercut'
     dt_lc = cli.get_twin('Machine', MACHINE_TYPE)

  3. Assemble the query:

     DATE_START = datetime(2017, 8, 6)
     DATE_END   = datetime(2017, 8, 7)
     QUERY = {
         'endtime' : {'$gte' : DATE_START, '$lt' : DATE_END},
         'machine.source_type' : MACHINE_TYPE
     }

  4. Use the query to fetch data. The data is returned as a pandas DataFrame. The same function can be applied to any data model:

     df_lc_c   = dt_lc.fetch_data(cli.session, 'cycle', QUERY, normalize=True)
     df_lc_dt  = dt_lc.fetch_data(cli.session, 'downtime', QUERY, normalize=True)
     df_lc_def = dt_lc.fetch_data(cli.session, 'defect', QUERY, normalize=True)

  5. You can now export the data, run an exploratory analysis, train a model, blend in data from other sources, or otherwise manipulate the data.
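As a standalone illustration of the query pattern above (pure Python dictionaries; no SDK connection required), note that listing several conditions in one dictionary combines them as an implicit AND:

```python
from datetime import datetime

# PyMongo-style filter: a half-open time range plus an exact field match.
DATE_START = datetime(2017, 8, 6)
DATE_END = datetime(2017, 8, 7)

query = {
    'endtime': {'$gte': DATE_START, '$lt': DATE_END},  # range condition
    'machine.source_type': 'Lasercut',                 # exact match
}

# Multiple top-level keys combine as AND; the Sight Machine API
# does not support logical OR ('$or').
```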

Generating Data Visualizations

You can generate basic visualizations, set chart titles, add overlays, and define panels using the SDK. Python also supports other visualization libraries.

Generating Basic Visualizations

The SDK provides simple methods for generating basic visualizations.

Scatter and Line Plot

To generate a scatter or line plot, pass in the dataframe columns in the ordering [x, y, z], where each value of z is displayed as a separate trace with a distinct color.

df_tmp = df_cycle[['temperature', 'pressure', 'machine']]
plt1 = cli.get_plot('scatter', df_tmp)
plt2 = cli.get_plot('line', df_tmp)

Bar Plot

To generate a bar plot, pass in the dataframe columns in the ordering [x, y, z], where each value of z is displayed as a separate trace with a distinct color. This grouping step illustrates one way of aggregating data.

df_tmp = df_cycle[['model_number', 'pressure', 'machine']
         ].groupby(['model_number', 'machine']
         ).mean().reset_index()[['model_number', 'pressure', 'machine']]
plt = cli.get_plot('bar', df_tmp)

Box Plot

To generate a box plot, pass in the dataframe columns in the ordering [x, y, z], where each value of z is displayed as a separate trace with a distinct color.

df_tmp = df_cycle[['model_number', 'pressure', 'machine']]
plt = cli.get_plot('box', df_tmp)

Histogram

To generate a histogram, pass in the dataframe columns in the ordering [x, z], where each value of z is displayed as a separate trace with a distinct color.

df_tmp = df_cycle[['pressure', 'machine']]
plt = cli.get_plot('histogram', df_tmp)

Pareto

To generate a pareto chart, pass in the dataframe columns in the ordering [x, y, z], where each value of z is displayed as a separate trace with a distinct color. This grouping step illustrates one way of aggregating data.

df_tmp = df_defect[['defect', 'quantity', 'machine']
         ].groupby(['defect', 'machine']
         ).sum().sort_values('quantity', ascending=False).reset_index()
df_tmp = df_tmp[['defect', 'quantity', 'machine']]
plt = cli.get_plot('pareto', df_tmp)

Heatmap

To generate a heatmap, pass in the dataframe columns in the ordering [x, y, z], where x and y are the labels and z is the value of each cell. This correlation step illustrates one common way of preparing correlation data for use in a heatmap.

df_tmp = df_cycle[[...]]  # select the numeric columns of interest
df_corr = df_tmp.corr().unstack().to_frame().reset_index()
plt = cli.get_plot('heatmap', df_corr)
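The grouping step used for the bar and pareto plots can be tried on toy data without the SDK; only the aggregation differs (mean vs. sum). The column names here are illustrative:

```python
import pandas as pd

# Toy cycle data in the [x, y, z] column order expected by the plot helpers.
df_cycle = pd.DataFrame({
    'model_number': ['A', 'A', 'B', 'B'],
    'pressure':     [10.0, 12.0, 8.0, 6.0],
    'machine':      ['M1', 'M2', 'M1', 'M2'],
})

# One bar per (model_number, machine) pair, averaging the y column.
df_tmp = (df_cycle
          .groupby(['model_number', 'machine'])
          .mean()
          .reset_index()[['model_number', 'pressure', 'machine']])
print(df_tmp)
```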

Using Other Styling Methods

You can add a chart title.

To set the title of your chart:

  1. Run:

     plt = cli.get_plot('line', df_tmp)
     plt.set_title('Temperature over Time')

Adding Overlays

The SDK offers support for adding overlays to plots using a Plotly feature called “shapes.” At present, the SDK provides methods for adding lines, rectangles, and circles/ovals.

The basic information needed to generate a shape includes its location, form, and style. These attributes of the shape must be stored as columns in a dataframe, which the SDK will interpret.

To add overlays:

  1. Generate the x and y coordinates of the shape’s bounding box from your data. This example overlay is designed to be applied to a graph of temperature over time; select the appropriate data for your own chart.

     df_rect = df_cycle[['starttime', 'endtime']].copy(deep=True)
     df_rect = df_rect.rename(columns={'starttime': 'x0', 'endtime': 'x1'})
     df_rect.loc[:, 'x0'] = pd.to_datetime(df_rect['x0'])
     df_rect.loc[:, 'x1'] = pd.to_datetime(df_rect['x1'])
     df_rect.loc[:, 'y0'] = 0
     df_rect.loc[:, 'y1'] = 1
  2. Set the form of the shape:

     df_rect.loc[:, 'xref'] = 'x'
     df_rect.loc[:, 'yref'] = 'paper'
     df_rect.loc[:, 'type'] = 'rect'
  3. Optionally, set styling for the shape. A default styling is applied to any options that are not specified. Note that Plotly expects some style and layout options to be nested; to set these, insert a dictionary into each cell that contains the appropriate levels of nesting. The dataframe is treated as the outside level.

     df_rect.loc[:, 'fillcolor'] = '#134936'
     df_rect.loc[:, 'opacity'] = 0.15
     df_rect.reset_index(drop=True, inplace=True)
     df_rect['line'] = pd.Series([{'width': 0} for i in range(len(df_rect.index))])
  4. After the shape definition dataframe is complete, apply it to your plot:

     df_tmp = df_cycle[['endtime', 'temperature', 'shift']]
     plt = cli.get_plot('line', df_tmp)
     plt.add_overlay(sm.plot.Shape(df_rect, 'df_rect'))
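The shape-definition dataframe can also be sketched directly with pandas. The values below are illustrative; the column names follow the Plotly shape attributes described above:

```python
import pandas as pd

# One shaded band on a time axis, expressed as a single-row shape frame.
df_rect = pd.DataFrame({
    'x0': pd.to_datetime(['2017-08-06 00:00']),
    'x1': pd.to_datetime(['2017-08-06 12:00']),
    'y0': [0], 'y1': [1],              # full plot height ('paper' units)
    'xref': ['x'], 'yref': ['paper'],
    'type': ['rect'],
    'fillcolor': ['#134936'], 'opacity': [0.15],
})
# Nested Plotly options go in as a dict per cell; the dataframe is
# treated as the outside level of nesting.
df_rect['line'] = [{'width': 0}]
```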

For more information about Plotly shapes and the available options, see the Plotly shapes documentation.

Generating Code and Customizing Plots

The SDK supports basic visualizations with Sight Machine styling. Advanced users can also customize the styling or other features of the plots.

The SDK generates the Python code used to make each plot. You can edit and run this code independently.

To generate the code and customize plots:

  1. Run:

     sys.stdout.write(str(plt.generate_code()))
     sys.stdout.flush()

  2. Copy the output into your environment (a new file, a Jupyter notebook cell, etc.) and edit as desired.

For more details, consult the Plotly reference documentation.

Defining Panels

Coming soon, the SDK will support defining panel layouts to structure multiple visualizations in one interface. You can apply these panels to analytics in the Sight Machine platform. Contact your Sight Machine Engagement Team for more details.

Structuring an Analysis for Use in the Sight Machine Platform

Because the SDK is an extension of the Sight Machine platform, you can turn analyses developed in the SDK into repeatable analytics that live inside the platform behind a user interface, analogous to the Data Discovery Tools.

Conveniently, the SDK retrieves data from the platform in the same format that platform analytics use, and the visualization tools are powered by Plotly, just as custom analyses are.

The following outlines how to develop an analysis, using the SDK, in a way that will be straightforward to translate into a platform dashboard. For details, contact your Sight Machine Engagement Team.

Consider the following questions in conjunction with the Sight Machine Engagement Team before beginning to develop your analysis:

  • What options will the user set before running the analysis (e.g., selecting a date range and an asset)?
  • What real-world problem does the analysis address?
  • Who is the audience of the analysis, and how can the results be presented in a way that makes them actionable for this audience?

A platform analytic contains three major sections:

  1. Retrieve data to analyze.
  2. Analyze the data.
  3. Format the output into a user-friendly, actionable visualization.

At present, the majority of sections 1 and 3 are completed by the Sight Machine team, and section 2 is provided by the customer.

The analysis should be split into a separate file that the platform can call, and organized into modular functions. This supports two convenient features:

  • Unit tests and computation checks can be written for the analysis.
  • Any updates can be made smoothly by replacing the entire file.

The skeleton will look something like the following. Consult with your Sight Machine Engagement Team on specifics.

Provided by Sight Machine:

from customer_analysis import analyze_feature
<code for retrieving data>
result = analyze_feature(data)
<code for generating visualization>

Provided by the customer:

def analyze_feature(data):
    <code for analysis>
    return result
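As a self-contained sketch of that split (the function name follows the skeleton above; the return shape is hypothetical), the customer-provided analysis lives in its own module so the platform can import it and a unit test can exercise it directly:

```python
# customer_analysis.py -- the customer-provided analysis (section 2),
# kept in its own file so the platform can import it and so updates
# can be made by replacing the entire file.
def analyze_feature(data):
    """Return simple summary statistics for a list of numeric values."""
    return {'count': len(data), 'mean': sum(data) / len(data)}

# A unit test can check the computation without the platform:
result = analyze_feature([2.0, 4.0, 6.0])
assert result == {'count': 3, 'mean': 4.0}
```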