JointEmbedding

Joint Embeddings of Shapes and Images via CNN Image Purification


Created by Yangyan Li, Hao Su, Charles Ruizhongtai Qi, and Leonidas J. Guibas from Stanford University, and Noa Fish and Daniel Cohen-Or from Tel Aviv University.

Introduction

We propose a way to embed 3D shapes and 2D images into a joint embedding space, so that all of the 3D shapes and 2D images become searchable from each other (live demo). The research paper was accepted to SIGGRAPH Asia 2015.
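Once shapes and images live in the same space, cross-modal search reduces to nearest-neighbor retrieval. Below is a minimal sketch of the idea, not code from this repository; the array names and the use of Euclidean distance are illustrative assumptions:

import numpy as np

# Illustrative only: shape_embeddings is an (N, D) array with one row per 3D
# shape, and image_embedding is the (D,) vector a trained image CNN produces
# for a query image. Neither name comes from this repository.
def retrieve_shapes(image_embedding, shape_embeddings, top_k=5):
    # Distance from the query image to every shape in the joint space.
    distances = np.linalg.norm(shape_embeddings - image_embedding, axis=1)
    # Return the indices of the top_k nearest shapes, closest first.
    return np.argsort(distances)[:top_k]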

License

JointEmbedding is released under the 4-clause BSD license (the original "BSD License"); refer to the LICENSE file for details.

Citing JointEmbedding

If you find JointEmbedding useful in your research, please consider citing:

@article{li2015jointembedding,
    Author = {Li, Yangyan and Su, Hao and Qi, Charles Ruizhongtai and Fish, Noa
        and Cohen-Or, Daniel and Guibas, Leonidas J.},
    Title = {Joint Embeddings of Shapes and Images via CNN Image Purification},
    Journal = {ACM Trans. Graph.},
    Year = {2015}
}

Contents

1. Usage: How to test with trained models?

To be added...

2. Usage: How to train your own models?

2.1. Requirements: datasets

2.2. Requirements: software

2.3. Requirements: hardware

2.4. Installation

The code is written in Python, MATLAB, and shell. The code itself requires no installation; just:

git clone https://github.com/ShapeNet/JointEmbedding.git JointEmbedding;
cd JointEmbedding/src;
cp default_global_variables.py global_variables.py
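The last command creates your local configuration file. As a purely hypothetical illustration of what such a configuration might contain (the variable names below are made up; consult default_global_variables.py for the real ones and their [take care!] markers):

# Hypothetical excerpt -- the actual variables live in
# default_global_variables.py; these names are illustrative only.
g_data_root = '/path/to/datasets'    # [take care!] where your datasets live
g_caffe_path = '/path/to/caffe'      # [take care!] your Caffe installation
g_num_threads = 8                    # worker threads used by the pipeline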

2.5. Run the pipeline

  1. Edit global_variables.py, especially the entries marked [take care!] (see the illustrative excerpt above).
  2. Execute run_preparation.sh. It downloads some third-party software and prepares the shell scripts for the next steps.
  3. Execute run_shape_embedding_training.sh to generate the shape embedding space.
  4. Execute run_image_embedding_training.sh to generate the synthetic images.
  5. Execute run_joint_embedding_training.sh to prepare for and start the actual joint embedding training.
Notes
  1. Steps 3 and 4 can run in parallel, but preferably on different machines: both are multi-threaded, so running them together on the same machine gains little speedup.
  2. Steps 3, 4, and 5 are also very I/O intensive; use a large SSD if you have one.
  3. The run_*.sh scripts further divide their work into smaller steps. Pass -f first_step -l last_step to a run_*.sh script to run only part of it (see the example after these notes).
  4. Read the scripts, starting from the run_*.sh ones, to better understand the code and build upon it!
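For example, to run only an intermediate slice of one stage (the step numbers here are illustrative):

bash run_shape_embedding_training.sh -f 2 -l 3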

3. Questions?

Please refer to the Frequently Asked Questions first.