================================
Usage
================================

.. contents:: Table of Contents

Execution
================================

.. _Dockerhub : https://hub.docker.com/r/neurodata/m2g/
.. _documentation : https://docs.docker.com/

In order to share data between our container and the rest of our machine in Docker, we need to mount a volume. Docker does this with the ``-v`` flag. Docker expects its input formatted as: ``-v path/to/local/data:/path/in/container``. We'll do this when we launch our container, as well as give it a helpful name so we can locate it later on.

The ``neurodata/m2g`` Docker container enables users to run end-to-end connectome estimation on structural MRI or functional MRI right from container launch. The pipeline requires that data be organized in accordance with the BIDS spec. If the data you wish to process are available on S3, you simply need to provide your S3 credentials at build time and the pipeline will auto-retrieve your data for processing. If you have never used Docker before, it is useful to run through the Docker documentation_.

**Getting Docker container**::

    $ docker pull neurodata/m2g

Structural Connectome Pipeline (`m2g-d`)
----------------------------------------

The structural connectome pipeline can be run with::

    $ m2g --pipeline dwi

We recommend specifying an atlas and lowering the default seed density on test runs (although, for real runs, we recommend using the default seeding -- lowering seeding simply decreases runtime)::

    $ m2g --pipeline dwi --seeds 1 --parcellation Desikan

You can set a particular scan and session as well (recommended for batch scripts)::

    $ m2g --pipeline dwi --seeds 1 --parcellation Desikan --participant_label
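Putting the volume-mounting step together with the container launch, a minimal sketch might look like the following. The local path and the in-container mount point ``/data`` are assumptions for illustration; substitute the location of your own BIDS-formatted dataset and whatever in-container path your workflow expects::

    # Hypothetical path to your BIDS-formatted dataset; replace before running.
    LOCAL_DATA=/path/to/local/data

    # Build the volume-mount spec: local path on the left of the colon,
    # in-container path on the right (the /data target is an assumption).
    MOUNT_SPEC="${LOCAL_DATA}:/data"

    # Launch interactively, naming the container "m2g" so it is easy to
    # locate later with `docker ps -a`.
    docker run -ti --name m2g -v "${MOUNT_SPEC}" neurodata/m2g

The ``--name`` flag is optional but convenient: it lets you refer to the container by a memorable name (e.g. ``docker stop m2g``) instead of its generated ID.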