Find help documents, tutorials, and/or relevant information.

Frequently Asked Questions

A list of frequently asked questions (FAQ) about the SPARC resources.
Updated at: 08/30/2021


What does SPARC stand for and what is it about?

SPARC stands for “Stimulating Peripheral Activity to Relieve Conditions” and our goal is to develop medical technologies, specifically around electrical stimulation to treat a variety of illnesses. Examples of such existing devices are vagus nerve stimulators for seizure reduction or sacral nerve stimulation for bladder control. For more detailed information about our program, see here.

What is a SPARC data set?

A SPARC data set is a collection of data files, supporting documents, and metadata produced by a SPARC investigator. According to the SPARC data sharing policy, a data set should include any data or supporting materials that the PI deems necessary for a 3rd party to reuse the data and reproduce or replicate the results. In general, such files may include raw (primary) data, experimental protocols, analysis code/workflows, processed (derivative) data, complete results, and textual descriptions of the datasets and their contents. At time of submission by a PI, the dataset may be considered complete (that is, no new data will be collected), or it may represent a batch or slice of data that is part of a larger dataset being collected over several milestones.
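The categories above (raw/primary data, protocols, analysis code, derivative data, and descriptive documents) can be sketched as a dataset skeleton. This is an illustrative sketch only: the folder names below follow the categories listed in this answer, and the official SPARC dataset layout may differ; consult the SPARC data sharing policy for the authoritative structure.

```python
import tempfile
from pathlib import Path

def make_dataset_skeleton(root: Path) -> list[str]:
    """Create an empty SPARC-style dataset skeleton and return its contents.

    Folder names are hypothetical examples mirroring the categories in the
    text above, not the official SPARC specification.
    """
    folders = ["primary", "derivative", "code", "protocol", "docs"]
    for name in folders:
        (root / name).mkdir(parents=True, exist_ok=True)
    # A top-level readme is expected even if it is only a placeholder.
    (root / "README.txt").write_text("Dataset notes go here.\n")
    return sorted(p.name for p in root.iterdir())

root = Path(tempfile.mkdtemp())
print(make_dataset_skeleton(root))
```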

What is the SPARC Material Sharing Policy?

The guidelines and polices to which SPARC OT awardees must adhere as a member of the SPARC consortium are provided in the SPARC Material Sharing Policy document.

Why do some datasets have the option to "Run Simulation" and why does that option bring me to a different website?

Some datasets are computational models (code-based content), and these can be executed on SPARC's simulation web application, o²S²PARC. When you click on "Run Simulation" on these computational datasets, you will be guided to o²S²PARC to execute the model or analysis pipeline. An o²S²PARC account is optional for executing these datasets from the SPARC Portal, but having an account will give you access to much greater functionality. To learn more about o²S²PARC, take a look here and at its extensive documentation.

What role does the SPARC imaging standard play?

The SPARC Standards for Optical Microscopy Imaging Data and Imaging Metadata ensure a consistent set of metadata specific to microscopy image data across the SPARC ecosystem. Microscopy image data is a common experimental modality and is inherently diverse: variability in sample preparation as well as in microscopy settings, methods, and equipment all contribute to the complexity. This standard, in conjunction with the SPARC Dataset Structure metadata standard, guarantees that the microscopy data hosted on the SPARC Portal can be understood and repurposed despite its complex and experiment-specific nature.

Why is microscopy imaging metadata important to SPARC?

Without metadata such as the histological stain or fluorescent label of each channel, or the physical dimensions of the pixels or voxels, it is impossible to fully understand, analyze, or reuse an image from a given dataset. Previously, such metadata was kept separate from the image file and could easily be lost; MicroFile+ addresses this challenge by embedding essential metadata directly in the image file. This ensures that the files carry relevant and critical information about the image itself and how it was acquired, such as channel information written into the file header.

How can SPARC researchers ensure their datasets meet requirements of the imaging standard?

The recommended best practice is to configure the microscope acquisition software to include metadata information, and to use this native format whenever possible for image analyses. However, when image data is saved in another format, such as converting from the Zeiss standard (.CZI) format to an open format such as TIFF, nearly all essential metadata is often lost from the file. When this occurs as part of the experimental analysis pipeline, a final step to re-create the metadata is required; if this minimum information is missing, the file is ultimately of limited value.

To address this issue and make it easier for investigators to adhere to the SPARC imaging standard, a freely available image file converter application called MicroFile+ has been created.

What is MicroFile+?

MicroFile+ software is a free, powerful image file converter that helps address the challenges of organizing and storing big data from modern laboratory imaging devices and retaining essential metadata. Image files from many sources can be converted into JPEG2000 and/or OME TIFF formats that allow for efficient storage, viewing, and analysis. MicroFile+ also allows for adding and editing essential metadata to ensure SPARC image data is FAIR (Findable, Accessible, Interoperable, and Reusable). State-of-the-art compression methods are also available to convert images into a manageable file size.

What metadata will MicroFile+ write and store?

The imaging standard defines 23 required metadata fields from a total of 48 commonly written metadata fields based on the OME-TIFF specification.

MicroFile+ software will write and store many different metadata fields, including, but not limited to: device, camera, objective, and detector information. Required fields for SPARC datasets include: modality and target label for each channel, pixel size, and objective magnification. Instead of providing a sidecar file filled with this essential metadata that could be lost or separated from the source image, MicroFile+ allows for the immediate addition or revision of imaging metadata, and writes that metadata to the file header of OME-TIFF files.
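The OME-TIFF header carries its metadata as an embedded OME-XML block, which downstream tools can read back out. The sketch below parses a minimal hand-written OME-XML fragment with the Python standard library to recover the channel labels and pixel size. The attribute names (`PhysicalSizeX`, `Channel Name`) follow the OME schema, but the fragment itself is a simplified illustration, and the schema namespace version may differ between real files.

```python
import xml.etree.ElementTree as ET

# Minimal, hand-written OME-XML fragment of the kind embedded in an
# OME-TIFF header. Real files contain many more fields.
OME_XML = """<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">
  <Image Name="example">
    <Pixels PhysicalSizeX="0.25" PhysicalSizeY="0.25" PhysicalSizeXUnit="um">
      <Channel Name="DAPI" />
      <Channel Name="GFP" />
    </Pixels>
  </Image>
</OME>"""

NS = {"ome": "http://www.openmicroscopy.org/Schemas/OME/2016-06"}

def read_metadata(xml_text: str) -> dict:
    """Extract a few essential fields from an OME-XML string."""
    root = ET.fromstring(xml_text)
    pixels = root.find(".//ome:Pixels", NS)
    return {
        "pixel_size_x": float(pixels.get("PhysicalSizeX")),
        "unit": pixels.get("PhysicalSizeXUnit"),
        "channels": [c.get("Name") for c in pixels.findall("ome:Channel", NS)],
    }

print(read_metadata(OME_XML))
```

Because the metadata travels inside the file rather than in a sidecar, any tool that can read the header can recover this information.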

What is the difference between the SPARC public data repository and the SPARC Portal?

Detailed information is provided in the SPARC Portal Data Repository Structure document.

For SPARC Investigators only

Which user accounts are available to SPARC project team members?

The SPARC program offers accounts in the SPARC Slack workspace, Airtable, o²S²PARC, and MBF Bioscience segmentation software. Please visit User Accounts & Organizations for details.

What is the difference between the SPARC data model and metadata on the DAT-Core?

The Data Standards Committee and MAP-Core have created a common data model to cover SPARC data. The UCSD curation team curates submitted data according to that data model, which is subsequently ingested as part of the dataset on the Pennsieve Platform (DAT-Core). PIs are free to add their own metadata to their datasets using the tools available in the Pennsieve Platform to organize their data for their own purposes, but that metadata does not need to conform to the SPARC data model. It is, however, important to correctly include the CSV files required by the BIDS-inspired structure.

Is the readme file necessary?

Yes, please create the file and keep it in your main folder. It may be a placeholder, i.e., a blank text file; however, if you have notes about the experiment that do not fit elsewhere, please put them here.

Where can I find help on uploading my protocol?

We’ve provided a brief tutorial here. Please note that this document changes slightly as we receive questions and improve its readability; more information is available on the website.

I tried to join the SPARC group, but I can’t find it. Why?

The SPARC group is private, so you need to be added as a member. Anita Bandrowski is the administrator. Please send her a message via her user account, including the email address you used to set up your protocols account.

How do I upload large data to Pennsieve?

Large file uploads (10 GB and greater) are supported using the web application as well as the Pennsieve Agent. More information can be found here.

Who do I contact at Pennsieve if I need support or assistance uploading my data?

For all support questions about the platform, feedback, or bug reports, please use the “Get Help” button after logging into the Pennsieve Platform to send a support request. This is the best method to reach the Pennsieve support and development team, and requests will be prioritized within Pennsieve to guarantee a quick response. For more general questions about the SPARC effort and the role of Pennsieve as the SPARC Data Core, please contact Dr. Leonardo Guercio or Dr. Joost Wagenaar by email or Slack (using the SPARC Slack account).

How do I know how to structure my files for SPARC?

SPARC has developed a standard file structure for organizing and naming files, inspired by BIDS (the Brain Imaging Data Structure). Instructions and examples are available. NOTE: Tabular data should be saved as .CSV files to make sure the data is readable without the need for Microsoft Office.
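The note above about saving tabular data as plain CSV can be sketched with Python’s standard library `csv` module, which writes files any text editor or analysis tool can read. The column names below are hypothetical examples, not fields mandated by SPARC.

```python
import csv
import io

# Hypothetical example records; column names are illustrative only.
rows = [
    {"subject_id": "sub-1", "species": "rat", "weight_g": 310},
    {"subject_id": "sub-2", "species": "rat", "weight_g": 295},
]

def to_csv(records: list[dict]) -> str:
    """Serialize a list of uniform dicts as CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(to_csv(rows))
```

Writing the same text to a `.csv` file on disk (instead of an in-memory buffer) keeps the data readable without spreadsheet software.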

Why follow the SPARC file structure?

The SPARC file structure is based on BIDS, an extensible standard developed for organizing data and metadata for neuroimaging, but which is also applicable to other domains. SPARC is currently evaluating how far BIDS can be extended to cover the data types and workflows used in SPARC, but it should be employed where possible. The BIDS standards have been endorsed by the International Neuroinformatics Coordinating Facility as fulfilling the requirements for a community standard that can and should be widely adopted across neuroscience. The use of community standards is a key pillar of FAIR.

The benefits to SPARC of using a structured file organization are as follows:

  • It will be easy for other researchers to work with your data.
  • To understand the organization of the files and their format, they need only refer to the SPARC documentation. This is important not only for consumers of SPARC data, but in your own lab as well (“future you”).
  • By using the SPARC file structure, you will save time trying to understand and reuse data acquired by a graduate student or postdoc who has since left the lab.

As an accepted community standard, BIDS is understood by a growing number of data analysis software packages, thereby increasing the usability of your data. Databases such as LORIS, COINS, XNAT, SciTran, and others will accept and export datasets organized according to BIDS. Thus, SPARC data can be combined more easily with data accruing in other projects. Validation tools can be developed that automatically check dataset integrity and let you and the curation team easily spot missing values.
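An integrity check of the kind mentioned above can be very simple in principle: compare the items present in a dataset against a required list and report anything missing. The required names below are illustrative placeholders, not the official SPARC or BIDS validation rules.

```python
import tempfile
from pathlib import Path

# Illustrative required items; the real SPARC/BIDS validators check a
# much richer set of files, folders, and metadata fields.
REQUIRED = {"primary", "README.txt"}

def missing_items(dataset_root: Path) -> set[str]:
    """Return the required files/folders absent from the dataset root."""
    present = {p.name for p in dataset_root.iterdir()}
    return REQUIRED - present

root = Path(tempfile.mkdtemp())
(root / "primary").mkdir()
print(missing_items(root))  # the readme has not been created yet
```

A curation team can run checks like this automatically on every submission, surfacing missing values before a reviewer ever opens the dataset.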

Once the data and protocol are made public after the embargo period, will I still be able to publish papers using these data and protocol? i.e., do journals consider release of data and protocols to be prior publication?

SPARC takes the position, consistent with publishers’ and journals’ stated policies on data sharing and preprint deposition, that release of data through the SPARC Data Portal and deposition of associated protocols does not preclude a researcher from publishing works that utilize or further describe either the data or the protocol. Publishers and journals actively support data sharing, and many either require or recommend that data be made available in a public repository at the time of publication. Authors are expected to include a data availability statement with the DOI or URL of the deposited dataset. Publishers and many journals have stated explicitly that deposition of a manuscript in a preprint service such as bioRxiv does not constitute prior publication. We consider that making the protocol available ahead of publication is covered under that policy and will not interfere with submission of any articles utilizing such protocols.

Why should I get an o²S²PARC account?

An o²S²PARC account allows you to create your own model, simulation, or data analysis pipelines, or edit the pipelines available from the SPARC Portal as published datasets. For more information on everything o²S²PARC can do, take a look at its documentation and request an account (it’s free and painless).