{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Examples\n", "\n", "In this example we will be looking at the basic usage of `jaxnnls` to solve a non-negative least squares (NNLS) problem. \n", "\n", "```{warning}\n", "While the algorithm can sometimes work with Jax's default 32-bit precision, it is recommended that you enable 64-bit precision. The Cholesky decompositions used can become unstable at lower precision and lead to `nan` results.\n", "```\n", "\n", "## Basic usage\n", "\n", "To begin we will write a function that randomly generates a non-trivial NNLS system." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "import jax\n", "\n", "# enable 64 bit mode\n", "jax.config.update(\"jax_enable_x64\", True)\n", "\n", "import jax.numpy as jnp\n", "import jaxnnls\n", "\n", "# adjust the print options to make easier to read\n", "jnp.set_printoptions(suppress=True)\n", "\n", "\n", "def generate_random_qp(key, nx):\n", " # split the random key\n", " key_q, key_mask, key_x, key_z = jax.random.split(key, 4)\n", " # make a positive definite Q matrix\n", " Q = jax.random.normal(key_q, (nx, nx))\n", " Q = Q.T @ Q\n", " # make the primal and dual variables (all positive)\n", " x = jnp.abs(jax.random.normal(key_x, (nx,)))\n", " z = jnp.abs(jax.random.normal(key_z, (nx,)))\n", " # mask out 50% of the values to zero\n", " mask = jax.random.choice(key_mask, jnp.array([True, False]), (nx,))\n", " x = jnp.where(mask, x, 0)\n", " z = jnp.where(mask, 0, z)\n", " # make the \"observed\" vector that has x as it's NNLS solution\n", " q = Q @ x - z\n", " return Q, q, x, z" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's make a Jax random key and generates an example system with a 5x5 `Q` matrix." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "_key = jax.random.key(0)\n", "key, _key = jax.random.split(_key)\n", "\n", "Q, q, x, z = generate_random_qp(key, 5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next let's find the unconstrained solution using `jnp.linalg.solve`" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[-0.10993943 0.01985061 1.00367504 0.12005477 -0.06829775]\n" ] } ], "source": [ "print(jnp.linalg.solve(Q, q))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can clearly see that this leads to negative values in the solution. Now let's take the same system but use the NNLS solver. If you are only interested in the primal solution (e.g. `x`) we can use `jaxnnls.solve_nnls_primal`. If you want both the primal and dual solutions (along with some extra diagnostic information) you should use `jaxnnls.solve_nnls`." ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0. 0. 0.97650555 0.10741558 0. ]\n" ] } ], "source": [ "jit_solve_nnls_primal = jax.jit(jaxnnls.solve_nnls_primal)\n", "x_solve = jit_solve_nnls_primal(Q, q)\n", "print(x_solve)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can see the solution being found is all positive as desired. We can also check this against the known solution." 
] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] } ], "source": [ "print(jnp.allclose(x, x_solve))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solving a batch of problems\n", "\n", "The solver is full compatible with `vmap` for solving a system of problems at the same time. First we will generate set of random problems with known solutions." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "key, _key = jax.random.split(_key)\n", "\n", "Qs, qs, xs, zs = jax.vmap(generate_random_qp, in_axes=(0, None))(jax.random.split(key, 20), 5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will `jit` and `vmap` the solver and apply it to our set of problems." ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[0. 0. 0.38382659 0.15292546 0. ]\n", " [1.92070267 1.5044806 0.72937339 0.1729202 0.10214948]\n", " [0. 0. 0. 0.25669574 1.20627346]\n", " [0.17402674 0. 0.20067439 0. 1.98034549]\n", " [0. 0. 0. 0. 0. ]\n", " [0.38442547 0. 0.73927296 0. 0.14899361]\n", " [0. 0. 1.09022179 0.28157659 0. ]\n", " [0.53804846 0.55245586 0.5648163 0.41864657 1.42616834]\n", " [0.2257546 0. 0. 0.24268287 1.05370187]\n", " [1.19462445 0. 0. 0. 1.31461407]\n", " [0.05825319 0. 0. 0.025179 0. ]\n", " [0.99083519 0.21995431 0.21457618 0. 1.63248801]\n", " [2.42041109 0. 0.78802546 0. 0. ]\n", " [0.97545337 0.78935263 1.41030076 0.9074219 0. ]\n", " [1.1603355 0. 0.75510333 0. 0.56652302]\n", " [0. 0. 0.54660379 0. 0. ]\n", " [0. 1.52725322 0. 0. 0.09822676]\n", " [0.92809165 0. 0. 0.28206953 0. ]\n", " [0.42286294 0. 0. 0. 0. ]\n", " [0.38076533 0.73707305 0. 0. 1.07173204]]\n" ] } ], "source": [ "batch_nnls = jax.jit(jax.vmap(jaxnnls.solve_nnls_primal, in_axes=(0, 0)))\n", "batch_xs = batch_nnls(Qs, qs)\n", "print(batch_xs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that all the solutions are indeed position as expected. Now let's check if they match the known solutions." ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] } ], "source": [ "print(jnp.allclose(xs, batch_xs))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Differentiating a NNLS\n", "\n", "If we are only looking at the primal solution with `jaxnnls.solve_nnls_primal` we can use automatic differentiation. For this example we will set up a simple loss function and calculated the gradients of that loss with respect to both `Q` and `q`." 
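] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal first check that gradients flow through the solver at all, we can differentiate a simple scalar summary of the primal solution with `jax.grad`. This is a sketch that reuses this notebook's `Q` and `q`; the `sum_of_solution` helper is our own, not part of `jaxnnls`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: gradient of the sum of the primal solution with respect to q\n", "sum_of_solution = lambda Q, q: jnp.sum(jaxnnls.solve_nnls_primal(Q, q))\n", "print(jax.grad(sum_of_solution, argnums=1)(Q, q))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a fuller example, we will set up a simple loss function and calculate the gradients of that loss with respect to both `Q` and `q`."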
] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "def loss(Q, q, target_kappa=0):\n", " x = jaxnnls.solve_nnls_primal(Q, q, target_kappa=target_kappa)\n", " x_bar = jnp.ones_like(x)\n", " residual = x - x_bar\n", " return jnp.dot(residual, residual)\n", "\n", "\n", "loss_and_grad = jax.jit(jax.value_and_grad(loss, argnums=(0, 1)))\n", "\n", "l, (dl_dQ, dl_dq) = loss_and_grad(Q, q)" ] }, { "cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3.797258930506071\n" ] } ], "source": [ "print(l)" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[0.00001912 0.00020742 0.00406618 0.00102431 0.00027023]\n", " [0.00020742 0.00110802 0.07179544 0.01015142 0.00138725]\n", " [0.00406618 0.07179544 0.19343259 0.24109751 0.09490007]\n", " [0.00102431 0.01015142 0.24109751 0.05406517 0.01317802]\n", " [0.00027023 0.00138725 0.09490007 0.01317802 0.00173121]]\n" ] } ], "source": [ "print(dl_dQ)" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[-0.00780425 -0.14497006 -0.19736384 -0.46876957 -0.19184031]\n" ] } ], "source": [ "print(dl_dq)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above example we set `target_kappa=0`. This means no smoothing will be applied to the gradients. In general, when dealing with constrained solvers like this, the gradients can be discontinuous. In this example we see that we only have non-zero gradient values for the elements of `x` that are non-zero when solved.\n", "\n", "If we were aiming to minimize our loss using a gradient decent method, we would only be able to move a subset of our parameters at a time because of this. By increasing the `target_kappa` value these discontinuities will be smoothed out, providing more useful information for gradient based optimizers." ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [], "source": [ "l_kappa, (dl_dQ_kappa, dl_dq_kappa) = loss_and_grad(Q, q, target_kappa=1e-3)" ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3.797258930506071\n" ] } ], "source": [ "print(l_kappa)" ] }, { "cell_type": "code", "execution_count": 73, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[0.00001912 0.00020742 0.00406618 0.00102431 0.00027023]\n", " [0.00020742 0.00110802 0.07179544 0.01015142 0.00138725]\n", " [0.00406618 0.07179544 0.19343259 0.24109751 0.09490007]\n", " [0.00102431 0.01015142 0.24109751 0.05406517 0.01317802]\n", " [0.00027023 0.00138725 0.09490007 0.01317802 0.00173121]]\n" ] } ], "source": [ "print(dl_dQ_kappa)" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[-0.00780425 -0.14497006 -0.19736384 -0.46876957 -0.19184031]\n" ] } ], "source": [ "print(dl_dq_kappa)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the loss value has not changed as the smoothing is only applied to the gradients. As for the two gradients, all the values have become non-zero. 
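] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make that concrete, here is a minimal sketch of a single gradient-descent update on `q` using the smoothed gradient. The step size `lr` and the name `q_step` are arbitrary choices for illustration, not part of `jaxnnls`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: one gradient-descent step on q using the smoothed gradient\n", "lr = 0.1  # arbitrary step size, for illustration only\n", "q_step = q - lr * dl_dq_kappa\n", "# the loss evaluated at the updated q\n", "print(loss(Q, q_step, target_kappa=1e-3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [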
"For more information about the smoothing process, please refer to the [qpax paper](https://arxiv.org/abs/2406.11749)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Diagnostic information\n", "\n", "In all the examples above we used `jaxnnls.solve_nnls_primal`, as we were only interested in the primal solution. If you want the dual solution or more diagnostic information, the `jaxnnls.solve_nnls` function is available." ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [], "source": [ "x, s, z, converged, number_iterations = jaxnnls.solve_nnls(Q, q)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outputs are:\n", "- `x`: the primal solution\n", "- `s`: the slack variable (will be the same as `x` if the algorithm converged)\n", "- `z`: the dual solution\n", "- `converged`: a flag that is `1` if the algorithm converged and `0` otherwise\n", "- `number_iterations`: the number of steps the algorithm took to converge\n", "\n", "```{note}\n", "The code will run a maximum of 50 steps before stopping and reporting that it did not converge.\n", "```\n", "\n", "```{note}\n", "Automatic differentiation is only available for `jaxnnls.solve_nnls_primal`, not this version of the function.\n", "```" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0. 0. 0.97650555 0.10741558 0. ]\n" ] } ], "source": [ "print(x)" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0. 0. 0.97650555 0.10741558 0. ]\n" ] } ], "source": [ "print(s)" ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0.38611367 0.11502944 0. 0. 0.0997703 ]\n" ] } ], "source": [ "print(z)" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1\n" ] } ], "source": [ "print(converged)" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "10\n" ] } ], "source": [ "print(number_iterations)" ] } ], "metadata": { "kernelspec": { "display_name": "pyauto", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 2 }