How to apply force during for-loop in diffmpm #55

Open · lynamer opened this issue Mar 29, 2022 · 3 comments

lynamer commented Mar 29, 2022

Taking diffmpm_simple.py as an example, I was thinking of adding an additional force during the p2g step, that is (at line 87):

grid_v_in[f, base + offset] += weight * (p_mass * v[f, p] - dt * x.grad[f, p] + affine @ dpos)

where x.grad is supposed to be $\partial \, \text{total\_energy} / \partial x$.
However, since we already use ti.Tape(loss=loss) to obtain the gradient with respect to init_v, I am wondering how to get both $\partial \, \text{total\_energy} / \partial x$ inside the for loop and $\partial \, \text{loss} / \partial \, \text{init\_v}$ after the loop.
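
For concreteness, here is a minimal, self-contained sketch of the mechanism for the first gradient, using a toy quadratic energy instead of the MPM elastic energy (only the total_energy.grad[None] = 1 plus kernel.grad() call pattern is the point here):

import taichi as ti

ti.init(arch=ti.cpu)

n = 8
x = ti.Vector.field(2, dtype=ti.f32, shape=n, needs_grad=True)
total_energy = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def init_x():
    for p in range(n):
        x[p] = [0.1 * p, 0.05 * p]

@ti.kernel
def compute_total_energy():
    for p in range(n):
        # toy quadratic energy; diffmpm_simple.py would use its elastic energy here
        total_energy[None] += 0.5 * x[p].dot(x[p])

init_x()
total_energy[None] = 0
compute_total_energy()

# manual backward pass for this single kernel:
# seed d(total_energy)/d(total_energy) = 1, then call the kernel's grad()
total_energy.grad[None] = 1
compute_total_energy.grad()

# x.grad[p] now holds d(total_energy)/d(x[p]); the force to scatter in p2g would be -x.grad[p]
print(x.grad[0])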

lynamer changed the title from "How to apply force in during for-loop in diffmpm" to "How to apply force during for-loop in diffmpm" on Mar 29, 2022
lynamer (Author) commented Mar 31, 2022

Here is some more detailed information about my question. I first tried nesting ti.Tape() to record both total_energy and loss:

with ti.Tape(loss=loss):
    set_v()
    for s in range(steps - 1):
        with ti.Tape(loss=total_energy):
            compute_total_energy(s)
        substep(s)

    compute_x_avg()
    compute_loss()

The program then produced no output.

Then I tried calling func.grad() manually instead of using the ti.Tape() autodiff:

# forward
for s in range(total_steps - 1):
    clear_grid()
    total_energy[None] = 0
    compute_total_energy(s)
    total_energy.grad[None] = 1
    p2g(s)
    grid_op(s)
    g2p(s)
x_avg[None] = [0, 0]
compute_x_avg()
compute_loss()

# backward
clear_particle_grad()
compute_loss.grad()
compute_x_avg.grad()
for s in reversed(range(steps - 1)):
    clear_grid()
    p2g(s)
    grid_op(s)
    g2p.grad(s)
    grid_op.grad(s)
    p2g.grad(s)

This still did not work, since all the gradients were zero.

k-ye (Member) commented Mar 31, 2022

cc @erizmr

erizmr (Collaborator) commented Mar 31, 2022

Hi @lynamer ,
I don't think nested tapes are how ti.Tape is designed to be used (i.e., they may induce unexpected behavior).
The tape is designed for the scenario with a single loss. For multiple losses, I would recommend using kernel.grad(), i.e., reversing the call trace manually. Here is the code I modified according to your case:

# Forward
set_v()
for s in range(steps - 1):
    # total_energy.grad[None] = 1
    compute_total_energy(s)
    # compute_total_energy.grad(s)
    substep(s)
x_avg[None] = [0, 0]
compute_x_avg()
compute_loss()

# Backward
loss.grad[None] = 1
total_energy.grad[None] = 1
clear_particle_grad()
compute_loss.grad()
compute_x_avg.grad()

for s in reversed(range(steps - 1)):
    g2p.grad(s)
    grid_op.grad(s)
    p2g.grad(s)
    compute_total_energy.grad(s)
set_v.grad()

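To make the call pattern above concrete, here is a minimal, self-contained toy (not MPM; all fields and kernels below are invented for illustration) showing that the manual reverse replay recovers d(loss)/d(init_v):

import taichi as ti

ti.init(arch=ti.cpu)

steps, n, dt = 16, 4, 0.01
x = ti.Vector.field(2, dtype=ti.f32, shape=(steps, n), needs_grad=True)
v = ti.Vector.field(2, dtype=ti.f32, shape=(steps, n), needs_grad=True)
init_v = ti.Vector.field(2, dtype=ti.f32, shape=(), needs_grad=True)
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def set_v():
    for p in range(n):
        v[0, p] = init_v[None]

@ti.kernel
def init_x():
    for p in range(n):
        x[0, p] = [0.1 * p, 0.0]

@ti.kernel
def substep(s: ti.i32):
    for p in range(n):
        # toy dynamics: a linear spring pulling every particle towards the origin
        v[s + 1, p] = v[s, p] - dt * x[s, p]
        x[s + 1, p] = x[s, p] + dt * v[s + 1, p]

@ti.kernel
def compute_loss():
    for p in range(n):
        loss[None] += x[steps - 1, p][0] / n

init_v[None] = [0.3, 0.0]
init_x()

# forward: record the call order
set_v()
for s in range(steps - 1):
    substep(s)
compute_loss()

# backward: seed the loss gradient, then replay every call in reverse order
loss.grad[None] = 1
compute_loss.grad()
for s in reversed(range(steps - 1)):
    substep.grad(s)
set_v.grad()

print('d(loss)/d(init_v) =', init_v.grad[None])

The key points are that every forward kernel call is mirrored by a .grad() call in exactly the reverse order, and that the gradient fields accumulate across calls, so in an optimization loop they need to be cleared between iterations (which is what clear_particle_grad() does in your version).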