Setting up the environment for the open-source 3D Object Detection project OpenPCDet using Docker

Yunshuang Yuan
4 min read · Sep 18, 2020


This story is more of a set of notes and a diary of my work and thoughts. If you just want the technical content, please skip the next paragraph.

If you work with deep learning, you may find that the most frustrating part of getting your models to run, especially those you want to reimplement, develop, or extend from some baseline work, is configuring the environment for these projects. Think of environment configuration as a routing problem in a mountainous region. You know the coordinates of your start and goal points, but you do not have a terrain map. So you roughly know which direction to go, but you have no idea what will happen on the way or how many mountains you will have to climb. You might run into a very high mountain; if you are strong enough, you might make it over. If you are not, you might give up at the foot or halfway up and take a detour. You roll with the punches all the way until you reach the goal. Life is hard, right? But c'est la vie! It is so in life, it is so when optimizing a deep learning model, and it is so when configuring the environment to run that model. It is always so! No complaints, just do it!

I am using Ubuntu 16.04 and an Nvidia GTX 1080 Ti. To make the reading more efficient, I have split the configuration into steps.

  1. Make sure you have installed a version of the Nvidia driver that is compatible with your GPU and CUDA. I aim to use CUDA 10.1 and installed driver version 450. For installing the Nvidia driver on Ubuntu, here is a simple instruction.
  2. Follow the official Docker documentation to install the newest version of Docker (I installed 19.03.12; to support nvidia-docker2, Docker >= 19.03 should be installed) and test that it is installed successfully. You may also want to do the post-install steps so that you do not have to type sudo every time you run docker (see the snippet after this list).
  3. Follow the Nvidia Docker installation guide to install Nvidia GPU support for Docker.
  4. Edit the Docker daemon configuration JSON file (/etc/docker/daemon.json) to tell the Docker daemon to use the Nvidia GPU runtime when building a Docker image or running a container, and restart the daemon afterwards (see the commands below the configuration). The content should look like this:
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
An illustration of Docker and the Docker daemon
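For reference, here is a minimal sketch of the shell commands behind the post-install step from step 2 and the daemon restart from step 4. They follow the official Docker documentation; the docker group name and the systemd service name are the standard ones, so adjust them if your setup differs.

# Step 2 post-install: run docker without sudo (log out and back in afterwards)
sudo groupadd docker
sudo usermod -aG docker $USER

# Step 4: restart the daemon after editing /etc/docker/daemon.json
sudo systemctl restart docker

# Check that the nvidia runtime is registered and set as default
docker info | grep -i runtime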

Now you are ready to build Docker images and run containers with CUDA support.
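As a quick smoke test before building anything, you can run nvidia-smi inside a minimal CUDA container (nvidia/cuda:10.1-base is only an example tag; the available tags on Docker Hub change over time):

# The GPU table printed here should match what nvidia-smi shows on the host
docker run --rm --gpus all nvidia/cuda:10.1-base nvidia-smi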

5. Since there are many issues when building the module “spconv”, I used the Docker image scrin/dev-spconv provided by the author and built my layers on top of it. All I did was write a Dockerfile with the following content and build the image (the build command is shown right after it). The resulting image is uploaded as opheliayuan/pcdet:3.0.

FROM scrin/dev-spconv:latest
WORKDIR /usr/src
RUN sudo apt-get update
RUN git clone https://github.com/open-mmlab/OpenPCDet.git
WORKDIR /usr/src/OpenPCDet
RUN pip install -r requirements.txt
RUN python ./setup.py develop
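Assuming the Dockerfile sits in the current directory, the image can be built and tagged like this (the tag simply mirrors the one mentioned above; pushing it to Docker Hub is optional):

# Build the image from the Dockerfile in the current directory
docker build -t opheliayuan/pcdet:3.0 .

# Optionally publish it so it can be pulled on other machines
docker push opheliayuan/pcdet:3.0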

6. Finally, you can run the image you just built anywhere with the following command. The meaning of the arguments is explained below.

docker run -it -v /path/to/data/on/host:/usr/src/OpenPCDet/data/kitti --rm --gpus all opheliayuan/pcdet:3.0 nvidia-smi

  • -it: i stands for interactive and, according to the official definition, “Keep STDIN open even if not attached”. In my understanding, with this option you can work in the container via the command line and see the output of the programs you are running. t asks Docker to allocate a pseudo-TTY so that you can interact with the container via the keyboard.
  • -v: configures a Docker volume, which maps a folder on the host filesystem to a specific mount point in the container. Here I use KITTI as an example: arrange the data in the structure shown after this list and map the kitti data folder to /usr/src/OpenPCDet/data/kitti.
  • --rm: automatically cleans up the container and removes its file system when the container exits, which prevents old container file systems from piling up.
  • --gpus: sets the GPU indices you want to use; “all” means use all GPUs.
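For reference, the KITTI folder layout that OpenPCDet expects looks roughly like this (taken from the project’s getting-started documentation; double-check it against the version of the repository you cloned):

data/kitti
├── ImageSets
├── training
│   ├── calib
│   ├── image_2
│   ├── label_2
│   └── velodyne
└── testing
    ├── calib
    ├── image_2
    └── velodyne

Once the data is mounted, you can replace nvidia-smi at the end of the docker run command with bash to get an interactive shell and start training from the tools folder inside the container, e.g. python train.py --cfg_file cfgs/kitti_models/pointpillar.yaml (the config path is only an example; check the cfgs folder in the repository for the exact file you want).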
