FAQs
Workspace & Pipeline Limits
Q: Is there a limit on the number of pipelines I can have in a single workspace?
A: Currently, there is no hard limit. However, we recommend keeping the number of pipelines in a workspace to 50 or fewer to maintain optimal performance and resource usage.
Q: Why can't I remove the last pipeline in a workspace?
A: Each workspace is required to have at least one pipeline to remain functional and valid within the system. If you attempt to remove the last pipeline in a workspace, you’d be left with a workspace that has no pipelines to manage or run, which breaks the expected workspace structure. As a result, the system blocks you from removing that final pipeline. If your goal is to remove all pipelines entirely, the recommended approach is to archive or delete the entire workspace instead.
Shared Deployed Pipelines
Q: Does running or pausing a shared deployed pipeline affect all workspaces using it?
A: Yes. Because it is shared, any change to a deployed pipeline’s run/paused status applies to all workspaces referencing that same pipeline.
Q: Do shared deployed pipelines use different resources?
A: No. Shared deployed pipelines are simply references to the same underlying deployed pipeline. Whether you have one or many workspaces referencing that pipeline, it still counts as a single deployed pipeline. Therefore, the resource usage does not multiply with each additional workspace sharing it.
Q: What happens if I remove a pipeline that has a shared deployed pipeline?
A: Removing that pipeline in your workspace does not delete the underlying data as long as at least one other workspace still references the same deployed pipeline. However, if yours is the last workspace referencing it, removing the pipeline deletes the underlying data as well.
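To make this lifecycle concrete, here is a minimal Python sketch of the reference-counting behavior described above. It is purely illustrative, not the product's implementation: the underlying data survives until the last referencing workspace lets go.

```python
# Illustrative sketch only -- models the reference-counting behavior
# described above, not the product's actual implementation.
class DeployedPipeline:
    def __init__(self, name: str):
        self.name = name
        self.references = set()  # workspaces referencing this deployment

    def add_reference(self, workspace: str):
        self.references.add(workspace)

    def remove_reference(self, workspace: str) -> bool:
        """Remove one workspace's reference; return True when the last
        reference is gone and the underlying data would be deleted."""
        self.references.discard(workspace)
        return not self.references

shared = DeployedPipeline("sensor-ingest")  # hypothetical pipeline name
shared.add_reference("workspace-a")
shared.add_reference("workspace-b")
print(shared.remove_reference("workspace-a"))  # False: workspace-b still refers, data kept
print(shared.remove_reference("workspace-b"))  # True: last reference removed, data deleted
```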
Q: Can I archive a workspace that has a shared deployed pipeline?
A: Yes, you can archive such a workspace. The pipeline data for that workspace will no longer be accessible once archived. However, this action does not affect other workspaces that share the same deployed pipeline.
Pipeline Management
Q: Can multiple pipelines write to the same data table (machine + machine type) or the same column?
A: No. Two or more pipelines cannot write to the exact same table, nor to the same column using different data types. This restriction preserves data integrity and prevents schema conflicts.
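As an illustration of the column-level part of this rule, the following hypothetical Python sketch shows the kind of type check that would reject a second pipeline writing an existing column with a different data type. The column names and types are made up for the example.

```python
# Illustrative sketch only -- not the product's actual validation code.
# It shows the kind of schema check that rejects two pipelines writing
# the same column with different data types.

def check_column_conflicts(existing_schema: dict, pipeline_writes: dict) -> list:
    """Compare the columns a new pipeline wants to write (name -> type)
    against an existing table schema; return any type conflicts."""
    conflicts = []
    for column, dtype in pipeline_writes.items():
        if column in existing_schema and existing_schema[column] != dtype:
            conflicts.append((column, existing_schema[column], dtype))
    return conflicts

# Pipeline A already writes `temperature` as a float; pipeline B tries
# to write the same column as a string, so its deployment is rejected.
existing = {"temperature": "float", "status": "string"}
incoming = {"temperature": "string"}
print(check_column_conflicts(existing, incoming))  # [('temperature', 'float', 'string')]
```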
Q: Why does copying a deployed pipeline in the same workspace only copy the draft pipeline?
A: When you copy a deployed pipeline within the same workspace, the system only duplicates its draft configuration rather than the deployed state. This is to avoid conflicts or invalid states that could arise from duplicating the exact same deployed pipeline in one workspace. Copying the draft lets you preserve the pipeline’s design while ensuring the newly copied pipeline can be validated and deployed separately, preventing clashes with the original deployment’s resources and data.
Q: Are there unlimited resources for preview?
A: No, previews do not have unlimited resources. The limit of three task slots for previews remains in place.
Merging Workspaces
Q: How are pipelines identified and merged?
A: When merging two workspaces, the system uses pipeline names (and some additional matching logic) to figure out which pipelines are equivalent. Even if a pipeline’s name has been slightly modified, the system can still recognize it as the same pipeline—provided the change is within a threshold of similarity or includes certain identifiable metadata. This allows the merge process to correctly match and consolidate pipelines, ensuring you don’t end up with unintended duplicates or lost references.
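The exact matching logic and threshold are internal to the product, but as a rough illustration, a similarity-based name match could look like the following Python sketch using the standard library's difflib. The 0.8 threshold and pipeline names are hypothetical values, not the product's actual settings.

```python
# Illustrative sketch only -- the real merge logic and its threshold are
# internal to the product. This shows how slightly renamed pipelines
# could still be recognized via name similarity.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # hypothetical threshold

def is_same_pipeline(name_a: str, name_b: str) -> bool:
    """Treat two pipeline names as the same pipeline when their
    similarity ratio meets the threshold."""
    ratio = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return ratio >= SIMILARITY_THRESHOLD

print(is_same_pipeline("Temperature Ingest", "Temperature Ingest v2"))  # expected: True
print(is_same_pipeline("Temperature Ingest", "Vibration Export"))       # expected: False
```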
Q: How does merging handle pipelines that have changed differently in two workspaces?
A: During a merge, if both workspaces contain incompatible modifications for the same pipeline, you’ll be prompted to resolve these differences directly in the canvas. You have three options:
1. Resolve the issue in the merging pipeline before completing the merge.
2. Finish the merge and then fix the pipeline in the newly merged workspace.
3. Cancel the merge if the conflicts cannot be resolved or you wish to defer merging.
Q: Do I keep my deployed pipeline (and its data or shared reference) after a merge?
A: Yes—as long as the pipeline itself hasn’t been modified between the two workspaces. If neither workspace introduced changes to the same deployed pipeline, the system merges them seamlessly, preserving the deployment status (and shared reference, if applicable).
Q: Why did I lose my deployed pipeline (and its data or shared reference) after a merge if I did not make any changes?
A: Even if you did not modify that pipeline, there may be a validation error caused by conflicts with one or more pipelines during the merge. If these conflicts exist, the system cannot preserve the existing deployed pipeline as-is.
Archiving Workspaces
Q: Will restoring an archived workspace recover the reference to deployed pipelines?
A: No. Restoring a workspace does not automatically re-deploy any pipelines, nor does it recreate references to shared deployed pipelines.
Q: Can I archive any workspace?
A: Not every workspace. You can archive any non-production workspace or group workspace; however, the Production workspace cannot be archived. If you wish to change which workspace is considered Production, set a different workspace as Production first, then archive the old one.
Revision History
Q: How does revision history work for multiple pipelines?
A: Each pipeline maintains its own revision history, even if it’s shared. Changes made in one workspace appear in the pipeline’s history, and drafts remain workspace-specific. If you copy or merge workspaces, the revision data is carried over to the new workspace.
Q: How do I handle production data if a shared pipeline is paused for testing in another workspace?
A: Pausing or running a shared deployed pipeline applies to all referencing workspaces. If you need an independent test without disrupting production, copy the pipeline to create a non-shared draft. This way, production data remains unaffected.
Meganode Scaling
Q: Are my existing deployed pipelines using the latest meganode scaling with the new release?
A: No, your existing pipelines still use the old meganode setup. Simply redeploying them will not switch them to the new setup. If you want to take advantage of the new scaling (where each pipeline has its own pod and storage), you’ll need to create or copy your existing pipeline into a new one and redeploy it. Once you move over to the new system, each pipeline will run separately and no longer share resources with other pipelines. This approach helps avoid running out of task slots and makes it easier to manage and remove individual pipelines (and any related costs). After transitioning your pipelines to the new model, you can archive the old ones to free up space and reduce expenses.
Q: How can I track the number of pipelines running on the new meganode scaling?
A: You can check this information in the Dev Panel under Pipeline Resources. There, you’ll see details on how many pipelines are using the dedicated “per-pipeline” resources.
Integrations
Q: Does enabling multiple pipelines or archiving workspaces change the underlying schema for ODBC access?
A: No. The schema remains unaffected, so there’s no impact on ODBC-based queries or integrations.
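For example, an existing ODBC query continues to work unchanged. The sketch below uses the open-source pyodbc library; the DSN, credentials, and table/column names are placeholders for this example, not values defined by the product.

```python
# Illustrative sketch using the open-source pyodbc library. The DSN,
# credentials, and table/column names below are hypothetical; substitute
# your configured ODBC data source. Because the schema is unchanged,
# queries like this work the same regardless of how many pipelines exist.
import pyodbc

# Connect through a pre-configured ODBC data source name (DSN).
conn = pyodbc.connect("DSN=my_datasource;UID=user;PWD=password")
cursor = conn.cursor()

# Query a machine-data table exactly as before.
cursor.execute("SELECT machine_id, temperature FROM machine_data")
for row in cursor.fetchall():
    print(row.machine_id, row.temperature)

conn.close()
```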
Q: Is there any impact on existing SDK integrations?
A: No immediate changes are required. Existing integrations should continue to function as before.