Your Questions, Answered
What is QHarbor?
QHarbor is a data storage and visualization software platform designed for physics researchers and data-driven scientists. It provides a secure, structured repository for measurement data, ensuring compliance with data management best practices. Data is stored and retrieved through an API, which can be deployed both on local servers and in the cloud. Our desktop application and Python package provide intuitive tools for uploading, organizing, querying, and visualizing datasets, ultimately enabling faster, more collaborative, and reproducible research.
What issue or need does it address?
We help institutions and researchers improve the accessibility, organization, persistence, and security of their research data by providing a solution to the following problems:
Fragmented data storage across multiple locations (local folders, network folders, databases)
Data loss risks (hardware failures, inadequate backups, forgotten storage locations)
Difficulty in managing user permissions and sharing data
Data spread across several formats (text, CSV, HDF5, QCoDeS, Quantify, …)
Limited data searchability and accessibility
Weak links between raw data and their interpretation
Our mission is to provide a straightforward platform for managing your data, with secure and scalable storage, a common dataset format, and a Python package and desktop application to browse, visualize, and analyze your data.
How does it work?
The user adds data sources (folder structure or databases) to our synchronization agent which automatically detects existing and new datasets, converts files where possible to netCDF-4/HDF5, and uploads them to the centralized server (for more info see section “How can I upload my data?”).
Data is stored on a centralized server via an API, which can be deployed on a local server or in the cloud. The metadata and the files are stored in a database and an object store, respectively.
Our DataQruiser desktop app allows researchers to search, organize, and create datasets, and to visualize or plot data files (we support interactive visualization for HDF5 files, images, HTML, text/code, CSV, and PDF).
Our qdrive Python package allows researchers to programmatically search, import, analyze, and upload data (a short sketch follows below).
If you use custom Python-based data acquisition software, you can integrate further by saving the measurement data directly in QHarbor, point by point, using our data collector or our sweep functions. In this case the first step (the synchronization agent) is not necessary.
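To give a sense of the programmatic workflow, here is a minimal sketch of searching for and opening a dataset with qdrive. The call names used (search, files, xarray) and the scope, tag, and attribute values are illustrative assumptions, not the documented qdrive API; refer to the package documentation for the actual interface.

    # Hypothetical sketch of programmatic access with the qdrive package.
    # The call names (search, files, xarray) are assumptions for illustration
    # only; consult the qdrive documentation for the real API.
    import qdrive

    # Find datasets in a scope by tag and attribute, then open the newest match.
    results = qdrive.search(scope="spin-qubit-project",       # hypothetical scope name
                            tags=["rabi"],                     # filter by tag
                            attributes={"sample": "dev_A12"})  # filter by attribute

    dataset = results[0]                  # most recent match (assumed ordering)
    print(dataset.uuid, dataset.title)    # every dataset carries a UUID and title

    # Files are converted to netCDF-4/HDF5 where possible, so they could be
    # opened directly as an xarray dataset for analysis.
    data = dataset.files["measurement.hdf5"].xarray()
    print(data)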
How is the research data organized?
Our platform structures research data into datasets. Each dataset is assigned a unique identifier (UUID) along with a title, attributes, and tags that make it simple to identify, search, and filter. A dataset can contain multiple files in any format, with each file carrying its own UUID and supporting multiple versions.
Datasets are grouped within scopes, which typically represent long-term research projects. Permissions can be assigned at the scope level to individual users or groups, ensuring secure collaboration.
To add consistency, scopes can also define a schema, which sets the attributes every dataset should include. For example, a schema might require each dataset to contain both a sample and a description. Attributes can be marked as required or optional, restricted to predefined values, or validated using regex patterns. This structured approach ensures that datasets remain well-organized, searchable, and tailored to the needs of each research project.
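As an illustration, a schema for a scope could look roughly like the sketch below. The exact syntax and the attribute names (sample, description, fridge, cooldown_id) are assumptions for illustration only; the real schema definition in QHarbor may differ.

    # Hypothetical sketch of a scope schema. The structure and keys shown here
    # are illustrative assumptions, not the exact QHarbor syntax.
    example_schema = {
        "sample":      {"required": True},                    # every dataset must name a sample
        "description": {"required": True},                    # free-text description
        "fridge":      {"required": False,                    # optional attribute,
                        "allowed_values": ["fridge_1",        # restricted to predefined values
                                           "fridge_2"]},
        "cooldown_id": {"required": False,
                        "pattern": r"^CD-\d{4}$"},            # validated with a regex pattern
    }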
What types of data are supported?
QHarbor supports a wide range of data types and file formats. It's designed to be independent of specific data acquisition software, so you don't have to change your existing workflows. The platform can automatically convert data files from various formats, such as QCoDeS, Quantify, Labber, HDF5, text, and CSV, into a unified netCDF-4/HDF5 format. This conversion process is also flexible, allowing you to define custom converters for other formats.
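Conceptually, a custom converter is a function that reads a raw file and returns the data in a form that can be written as netCDF-4/HDF5. The sketch below is only an outline under that assumption, using xarray as an intermediate representation; the actual registration hook in QHarbor may look different.

    # Conceptual sketch of a custom converter: a function that parses a
    # proprietary file and returns an xarray.Dataset, which can then be
    # written out as netCDF-4/HDF5. How such a converter is registered with
    # QHarbor is not shown here and may differ in practice.
    import numpy as np
    import xarray as xr

    def convert_my_format(path: str) -> xr.Dataset:
        # Parse the raw file (two columns: voltage, current) into arrays.
        voltage, current = np.loadtxt(path, unpack=True)
        return xr.Dataset(
            data_vars={"current": ("voltage", current)},
            coords={"voltage": voltage},
        )

    # The resulting xarray.Dataset can be stored as netCDF-4/HDF5, e.g.:
    # convert_my_format("iv_curve.dat").to_netcdf("iv_curve.nc")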
QHarbor also fully supports the storage and retrieval of non-data files (e.g., images, text). Visualization support is also offered for certain file types, including HDF5 files, images, HTML, text/code, CSV, and PDF, through the DataQruiser desktop application.
How can I upload my data?
There are several ways to upload your measurement data to QHarbor, depending on your workflow:
Direct upload from Python: Create datasets and add files programmatically, or convert a directory into a dataset (see the sketch after this list).
From existing measurement frameworks: Use our sync-agent with QCoDeS, Labber, Quantify, or core_tools to automatically synchronize new and existing data.
From folders: Add a small info file to each measurement folder, and all files inside are automatically recognized as a dataset and kept up to date.
For advanced usage you can also exchange data via our REST API.
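To make the first option concrete, here is a minimal sketch of a direct upload from Python. The function names (create_dataset, add_file) and their arguments are assumptions made for illustration, not the documented qdrive API.

    # Hypothetical sketch of a direct upload with the qdrive package.
    # The function names (create_dataset, add_file) are assumptions for
    # illustration; refer to the qdrive documentation for the real API.
    import qdrive

    # Create a new dataset in a scope and attach files to it.
    ds = qdrive.create_dataset(
        scope="spin-qubit-project",             # hypothetical scope name
        title="IV sweep, sample dev_A12",
        attributes={"sample": "dev_A12"},
        tags=["iv-sweep"],
    )
    ds.add_file("iv_curve.nc")                  # measurement data (netCDF-4/HDF5)
    ds.add_file("setup_photo.png")              # non-data files are supported too
    print("Created dataset", ds.uuid)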
Can I upload all my existing data?
Yes, QHarbor is designed to synchronize your existing data. Its synchronization agent can be connected to your current data sources (such as folder structures or databases). It can also detect and convert data files to a unified format (netCDF-4/HDF5) before uploading them to the centralized server.
Can I run my measurements with QHarbor?
Yes. In Python you can use our sweep functions to run measurements, or integrate your own measurement loop with the Measurement class, which collects the measurement data points (docs).
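As an outline of the second approach, the sketch below shows a point-by-point measurement loop built around a Measurement object. The method names (register_parameter, add_result), the constructor arguments, and the stand-in instrument call are illustrative assumptions; the docs linked above describe the actual interface.

    # Conceptual outline of a point-by-point measurement loop. The Measurement
    # interface shown here (register_parameter, add_result) is an assumption
    # for illustration; see the docs for the actual signatures.
    import numpy as np
    from qdrive import Measurement   # hypothetical import path

    def measure_current(v: float) -> float:
        # Stand-in for your own instrument readout.
        return 1e-3 * v

    with Measurement(name="IV sweep", sample="dev_A12") as m:
        m.register_parameter("voltage", unit="V")   # swept parameter
        m.register_parameter("current", unit="A")   # measured parameter

        for v in np.linspace(-1.0, 1.0, 101):
            i = measure_current(v)                  # your instrument call
            m.add_result(voltage=v, current=i)      # each point is stored as it arrives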
What are the deployment options?
QHarbor offers flexible deployment options to suit different needs:
On-premises: This option provides you with complete control over your data's location and network access. You can deploy QHarbor on bare metal servers or with your chosen cloud provider, such as Azure, AWS, or Google Cloud.
Private Cloud: We deploy and maintain the QHarbor API for you in a dedicated private cloud environment that contains only your data. You can select the specific country where the data center is located to comply with data sovereignty requirements.
On-rack: We provide you with a server rack with the QHarbor API installed and ready to operate next to your measurement setup.
Is it secure?
Data security is our number one priority. We have a robust process for keeping your information secure, which includes:
Regular Internal Audits: Our code is regularly tested for vulnerabilities by our internal team.
Third-Party Security Reviews: We hire independent security experts to perform thorough penetration testing of all our API endpoints. A report is available on request.
On-premises deployment: For institutions with strict security requirements, QHarbor can be deployed on your own local servers or cloud environment. This ensures that your data remains entirely within your institutional network, behind your firewalls and VPNs, giving you full control over data security and access.
Can I export my data out of QHarbor?
Yes, you can export all your data from QHarbor. The export tool is designed to prevent vendor lock-in by allowing you to retrieve your dataset's metadata and all its associated files using a unique dataset identifier (UUID).
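As a rough illustration, an export could look like the sketch below; the function name (export_dataset) and its arguments are assumptions for illustration only.

    # Hypothetical sketch of exporting a dataset and all its files by UUID.
    # The function name and arguments are assumptions for illustration.
    import qdrive

    qdrive.export_dataset(
        uuid="2f6c1f2e-9a7b-4c1d-8e2a-1b2c3d4e5f60",  # hypothetical dataset UUID
        destination="./exports/",                      # local folder for metadata and files
    )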
Is it compatible with institutional networks, and can it be integrated with them?
Yes, QHarbor is designed with institutional integration in mind and offers several key compatibility features:
Authentication and Access Control: QHarbor integrates with institutional identity providers, including Entra ID and other common SSO systems. This means that students can log in using their university authentication system. We also support importing security groups from Entra ID and assigning access permissions based on these groups.
Deployment Options: QHarbor supports both on-premises and cloud deployments. On-premises installation provides complete control over data location and network access. QHarbor can be deployed on Azure, AWS, Google Cloud, or bare-metal infrastructure.
Network and Security: QHarbor is compatible with standard institutional firewall configurations. Data in transit is encrypted using HTTPS with TLS 1.2+ for all external traffic and SSL/TLS for internal service-to-service communication.
Administration: Administrative control over user accounts and permissions through your existing identity management system. We also provide an admin interface for managing users and groups.
How does it compare to existing solutions?
QHarbor is specifically designed for data-intensive environments like experimental physics where hundreds of measurements can be generated per day. While data management solutions exist in fields like biology and medicine, these typically handle smaller datasets that are manually entered or uploaded. QHarbor addresses the unique challenges of high-throughput measurements where automated data capture and processing are essential for research efficiency.
Here is a list of features unique to our software:
Integration with your institution's identity provider (e.g., Entra ID) for SSO login, meaning authentication is managed by your institution.
Data-acquisition-software independence, meaning that regardless of the data acquisition software used, the data is converted and stored in a unified format and structure. Importantly, this means that scientists do not need to make any changes to their existing measurement infrastructure or workflow, easing adoption.
Automatic synchronization from data sources: our synchronization agent continuously monitors the data sources and automatically uploads new datasets.
Automatic conversion from several data formats to netCDF-4/HDF5, and flexibility to define custom converters.
Interactive plotting of the data and live plotting of the current measurement (data points are plotted as the measurement progresses).
Data is accessible in three ways: via the desktop application, the Python package, and HTTPS requests to the API, covering everything from simple browsing to integration in server applications.
Fast, intelligent search and filtering across millions of datasets.
Beyond our software platform, QHarbor offers consultancy services for tailored solutions and dedicated support. Our team consists of experts with scientific backgrounds in experimental physics, enabling us to understand your specific research challenges and provide targeted solutions that integrate well with your existing workflows and institutional requirements.
What does it cost?
Cost depends on deployment type and number of users. Contact us for a quote for the solution that fits your needs.