Inquiry about latent_query_points_type in ModelLAM class #18

Closed
coding-kuku opened this issue Apr 17, 2025 · 1 comment

@coding-kuku

LAM is excellent work, and I am grateful for the contributions your team has made to the community!

I have a question about the ModelLAM class in your codebase, specifically the latent_query_points_type attribute. I noticed that this attribute can be set to either "embedding" or "e2e_flame". From my understanding, "embedding" assigns each FLAME vertex its own learnable embedding, while "e2e_flame" derives an embedding from the point's coordinates. Could you explain why "e2e_flame" was chosen over "embedding"? I did not find this detail discussed in the paper, and I would greatly appreciate any insight into this decision.

        self.latent_query_points_type = kwargs.get("latent_query_points_type", "e2e_flame")
        if self.latent_query_points_type == "embedding":
            # One freely learnable embedding vector per query point.
            self.num_pcl = num_pcl
            self.pcl_embeddings = nn.Embedding(num_pcl, pcl_dim)
        elif self.latent_query_points_type.startswith("flame"):
            # Fixed FLAME point coordinates loaded from disk, embedded via PointEmbed.
            latent_query_points_file = os.path.join(human_model_path, "flame_points", f"{self.latent_query_points_type}.npy")
            pcl_embeddings = torch.from_numpy(np.load(latent_query_points_file)).float()
            print(f"==========load flame points:{latent_query_points_file}, shape:{pcl_embeddings.shape}")
            self.register_buffer("pcl_embeddings", pcl_embeddings)
            self.pcl_embed = PointEmbed(dim=pcl_dim)
        elif self.latent_query_points_type.startswith("e2e_flame"):
            # Point coordinates supplied end-to-end at runtime, embedded via PointEmbed.
            skip_decoder = True
            self.pcl_embed = PointEmbed(dim=pcl_dim)
        else:
            raise NotImplementedError
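
For context, here is a minimal sketch of how the two settings discussed above would produce per-point query latents at run time. The forward pass is not part of this excerpt, so the helper name, tensor shapes, and the PointEmbed call signature below are assumptions:

    import torch

    def query_point_latents(model, flame_vertices):
        # flame_vertices: assumed (B, N, 3) FLAME vertex coordinates.
        if model.latent_query_points_type == "embedding":
            # One freely learnable vector per query point, independent of geometry.
            idx = torch.arange(model.num_pcl, device=flame_vertices.device)
            return model.pcl_embeddings(idx)        # (num_pcl, pcl_dim)
        else:
            # Coordinate-based: PointEmbed is assumed to map
            # (B, N, 3) -> (B, N, pcl_dim).
            return model.pcl_embed(flame_vertices)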
@ethnhe
Collaborator

ethnhe commented Apr 18, 2025

Thanks for your interest in our work. The learnable point embedding can be implemented either with nn.Embedding or with a positional embedding followed by a learnable MLP. We use the latter in the released code, and we mention this in the paragraph above Formula (1) in our paper. Using nn.Embedding also yields good results.
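
For readers wondering what "positional embedding with a learnable MLP" looks like in practice, here is a generic, self-contained sketch of the idea (fixed Fourier features of the xyz coordinates projected by a small MLP); it illustrates the technique only and is not the repository's actual PointEmbed implementation:

    import torch
    import torch.nn as nn

    class FourierPointEmbed(nn.Module):
        """Coordinate-based point embedding: fixed sin/cos (Fourier)
        features of xyz, projected by a learnable two-layer MLP."""

        def __init__(self, dim, num_freqs=8):
            super().__init__()
            # Fixed (non-learnable) frequency bands.
            self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs).float())
            in_dim = 3 * num_freqs * 2  # sin + cos per axis and frequency
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, dim), nn.GELU(), nn.Linear(dim, dim)
            )

        def forward(self, points):                         # points: (B, N, 3)
            x = points[..., None] * self.freqs             # (B, N, 3, F)
            feats = torch.cat([x.sin(), x.cos()], dim=-1)  # (B, N, 3, 2F)
            return self.mlp(feats.flatten(-2))             # (B, N, dim)

By contrast, the nn.Embedding variant simply indexes a learnable table by point index and does not use the point's 3D coordinates at all.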
