
Multiply matrix by vector row-wise #1358

Open
nilgoyette opened this issue Jan 29, 2024 · 6 comments

Comments

@nilgoyette
Contributor

How can I multiply a matrix and a vector "row-wise" in nalgebra, like I would in numpy?

>>> m = np.eye(3)
>>> v = np.array([1, 2, 3])
>>> m * v  # Or v * m
# Gives [[1, 0, 0],
#        [0, 2, 0],
#        [0, 0, 3]]

I understand that I can convert my vector into a square matrix and then use * (or dot), or write a generic function that does this task, but multiplying each row by a number should be faster.
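(For illustration, a minimal sketch of that "convert the vector into a square matrix" workaround in nalgebra, assuming f64 and fixed 3×3 types; Matrix3::from_diagonal builds the square matrix from the vector:)

use nalgebra::{Matrix3, Vector3};

fn main() {
    let m = Matrix3::<f64>::identity();
    let v = Vector3::new(1.0, 2.0, 3.0);
    // Post-multiplying by diag(v) scales column j of m by v[j],
    // which is what the numpy broadcasting example above computes.
    let scaled = m * Matrix3::from_diagonal(&v);
    println!("{}", scaled);
}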

I thought there would be a method/operator that does it, so I asked this question on the r/rust "simple question" post and on Stack Overflow. It seems that such a method doesn't exist. Can you please confirm?

Moreover, if this is right, should we add this method, and what would be its name?

@tpdickso
Collaborator

Can you elaborate on how you want this operation to work? The example you've given doesn't seem quite correct, as far as I can tell; m * v should produce the vector [1, 2, 3]. I'm not sure if having a built-in function would be faster than rolling your own, either, as:

m.row_iter_mut().zip(v.iter()).for_each(|(mut row, component)| {
    row *= *component;
});

should do roughly the same thing execution-wise as a built-in method would. Do you have a use case for which this approach benchmarks poorly?

Can you also elaborate on the use case for this per-row componentwise multiplication? It could be a built-in method on Matrix, but there are unfortunately diminishing returns on adding specialized methods: when there are too many, they stop being discoverable and users may end up reaching for the row_iter_mut() approach anyway. So if this is a common operation with a well-understood name, that would strengthen the case for having it as its own method on Matrix; but if it would end up being named something like row_component_mul{_assign}, it might add more API surface area than it saves downstream users.
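(Purely for illustration, a sketch of what such a hypothetical row_component_mul_assign could look like as an extension trait, specialized to f64 and 3×3; nothing like this exists in nalgebra today:)

use nalgebra::{Matrix3, Vector3};

trait RowComponentMulAssign {
    // Scale row i of self by v[i] in place, i.e. self <- diag(v) * self,
    // matching the row_iter_mut() snippet above.
    fn row_component_mul_assign(&mut self, v: &Vector3<f64>);
}

impl RowComponentMulAssign for Matrix3<f64> {
    fn row_component_mul_assign(&mut self, v: &Vector3<f64>) {
        for (mut row, component) in self.row_iter_mut().zip(v.iter()) {
            row *= *component;
        }
    }
}

fn main() {
    let mut m = Matrix3::<f64>::identity();
    m.row_component_mul_assign(&Vector3::new(1.0, 2.0, 3.0));
    assert_eq!(m, Matrix3::from_diagonal(&Vector3::new(1.0, 2.0, 3.0)));
}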

@nilgoyette
Contributor Author

I have a 3x3 transformation matrix m and a spacing vector v (the resolution of the voxels). I need to apply the spacing to the matrix, so I thought I could do v * m. I'm not a mathematician and I don't know whether programmers do that operation seldom or often; I only know that numpy supports it.

I agree with both your points:

  • I can roll my own
  • adding a new method would not be very discoverable

I was just asking if it's a good idea. However, now that I think about it more, there's no need to add a method: this operation could use the * operator, like in numpy. As I said, I'm no mathematician... Is it "wrong" to define the behavior of matrix * vector and vector * matrix? Should it stay forbidden, as it is now?

@ThomasdenH

The * operator currently does regular matrix multiplication, and I think having it do anything else would be confusing. In many cases, multiplying a 3×3 matrix by a 1×3 vector indicates a mistake; perhaps the vector should have been transposed. It may not be wrong to use the * operator for multiple kinds of multiplication, but it's probably not desirable.

@Andlon
Sponsor Collaborator

Andlon commented May 14, 2024

If I understand @nilgoyette correctly, I think this is mathematically best expressed by the multiplication of a diagonal matrix and a matrix, i.e. $\text{diag}(v) M$, where $\text{diag}(\cdot)$ is an operation that forms a diagonal matrix. nalgebra currently doesn't have a specialized matrix type for diagonal matrices, though we have discussed it many times. We might still decide to add this.

I don't think we want to mimic numpy's unintuitive implicit broadcasting features.

EDIT: I think I got the order reversed; it's ambiguous from @nilgoyette's original example, since they're using the identity matrix for $M$.
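For concreteness, the two orderings differ componentwise as

$$(\text{diag}(v)\,M)_{ij} = v_i\, m_{ij}, \qquad (M\,\text{diag}(v))_{ij} = m_{ij}\, v_j,$$

so $\text{diag}(v)\,M$ scales the rows of $M$, while $M\,\text{diag}(v)$ scales its columns; with $M = I$ both give the same result.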

@ThomasdenH

Is there still discussion on what that would look like? Would you accept a PR for it? Something like Vector::<T, R, S>.diag() -> Matrix::<T, R, R, S>?

@Andlon
Sponsor Collaborator

Andlon commented May 14, 2024

Is there still discussion on what that would look like? Would you accept a PR for it? Something like Vector::<T, R, S>.diag() -> Matrix::<T, R, R, S>?

We should probably flesh this out in a separate issue, but I'll give you some starting points. I think @sebcrozet has been skeptical of adding it in the past, because of the combinatorial explosion of additional types that need to interact with matrices (and references, mutable references, ...). However, I'm personally in favor of at least exploring this option, because I find myself missing diagonal matrices in every project I work on.

I think we might consider something like a wrapper around a vector storage, so it might look like:

pub struct DiagonalMatrix<T, N, S> {
    // The diagonal entries, stored as an ordinary vector.
    data: Vector<T, N, S>,
}

and then, on Vector, implement methods like .as_diagonal() (which might wrap a view of the vector in a DiagonalMatrix), .as_diagonal_mut(), and .into_diagonal(), which moves the vector into a diagonal matrix. Perhaps a way to cut down on the combinatorial aspect would be to say that only references to diagonal matrices interact with (references to?) matrices, and not owned types. This avoids having to add 9 different combinations for each order of the operands, at a small ergonomic cost (see the sketch below).
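(For concreteness, a minimal sketch of the references-only idea, specialized to 3×3 and f64 with hypothetical names; a real implementation would be generic over dimension and storage:)

use nalgebra::{Matrix3, Vector3};
use std::ops::Mul;

// Hypothetical wrapper type: the diagonal entries live in a plain vector.
pub struct DiagonalMatrix3 {
    data: Vector3<f64>,
}

impl DiagonalMatrix3 {
    pub fn from_vector(data: Vector3<f64>) -> Self {
        Self { data }
    }
}

// diag(v) * M scales row i of M by v[i]. Only references participate in the
// operator, mirroring the "references only" idea above.
impl<'a, 'b> Mul<&'b Matrix3<f64>> for &'a DiagonalMatrix3 {
    type Output = Matrix3<f64>;

    fn mul(self, rhs: &'b Matrix3<f64>) -> Matrix3<f64> {
        let mut out = *rhs;
        for (mut row, d) in out.row_iter_mut().zip(self.data.iter()) {
            row *= *d;
        }
        out
    }
}

fn main() {
    let d = DiagonalMatrix3::from_vector(Vector3::new(1.0, 2.0, 3.0));
    let m = Matrix3::<f64>::identity();
    println!("{}", &d * &m); // prints diag(1, 2, 3)
}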

I think a small prototype around this idea shouldn't be too much work, and could help us figure out the trade-offs and design considerations better. If you want, feel free to join us on our Discord server for some casual discussion.
