From 20c9ac0a018255fd56bbeca739bd1306082659e2 Mon Sep 17 00:00:00 2001 From: TomasPegado Date: Mon, 3 Feb 2025 11:51:53 -0300 Subject: [PATCH 1/4] docs: add a Getting Started Section --- nx/guides/getting_started/introduction.md | 86 +++++ nx/guides/getting_started/quickstart.livemd | 395 ++++++++++++++++++++ nx/mix.exs | 3 + 3 files changed, 484 insertions(+) create mode 100644 nx/guides/getting_started/introduction.md create mode 100644 nx/guides/getting_started/quickstart.livemd diff --git a/nx/guides/getting_started/introduction.md b/nx/guides/getting_started/introduction.md new file mode 100644 index 0000000000..8a3586a86c --- /dev/null +++ b/nx/guides/getting_started/introduction.md @@ -0,0 +1,86 @@ +# What is Nx? + +Nx is the numerical computing library of Elixir. Since Elixir´s primary numerical datatypes and structures are not optimized for numerical programming, Nx is the fundamental package built to bridge this gap. + +[Elixir Nx](https://github.com/elixir-nx/nx) smoothly integrate to typed, multidimensional data implemented on other +platforms (called [tensors](introduction.html#what-are-tensors)). This support extends to the compilers and +libraries that support those tensors. Nx has four primary capabilities: + +- In Nx, tensors hold typed data in multiple, named dimensions. +- Numerical definitions, known as `defn`, support custom code with + tensor-aware operators and functions. +- [Automatic differentiation](https://arxiv.org/abs/1502.05767), also known as + autograd or autodiff, supports common computational scenarios + such as machine learning, simulations, curve fitting, and probabilistic models. +- Broadcasting, which is term for element-by-element operations. Most of the Nx operations + automatically broadcast using an effective algorithm. You can see more on broadcast + [here.](intro-to-nx.html#broadcasts) + +Here's more about each of those capabilities. Nx tensors can hold +unsigned integers (u2, u4, u8, u16, u32, u64), +signed integers (s2, s4, s8, s16, s32, s64), +floats (f32, f64), brain floats (bf16), and complex (c64, c128). +Tensors support backends implemented outside of Elixir, including Google's +Accelerated Linear Algebra (XLA) and LibTorch. + +Numerical definitions have compiler support to allow just-in-time compilation +that support specialized processors to speed up numeric computation including +TPUs and GPUs. + +## What are Tensors? + +In Nx, we express multi-dimensional data using typed tensors. Simply put, +a tensor is a multi-dimensional array with a predetermined shape and +type. To interact with them, Nx relies on tensor-aware operators rather +than `Enum.map/2` and `Enum.reduce/3`. + +It allows us to work with the central theme in numerical computing, systems of equations, +which are often expressed and solved with multidimensional arrays. + +For example, this is a two dimensional array: + +$$ +\begin{bmatrix} + 1 & 2 \\\\ + 3 & 4 +\end{bmatrix} +$$ + +As elixir programmers, we can typically express a similar data structure using a list of lists, +like this: + +```elixir +[ + [1, 2], + [3, 4] +] +``` + +This data structure works fine within many functional programming +algorithms, but breaks down with deep nesting and random access. + +On top of that, Elixir numeric types lack optimization for many numerical +applications. They work fine when programs +need hundreds or even thousands of calculations. However, they tend to break +down with traditional STEM applications when a typical problem +needs millions of calculations. 
+ +To solve for this, we can simply use Nx tensors, for example: + +```elixir +Nx.tensor([[1,2],[3,4]]) + +Output: +#Nx.Tensor< +s32[2][2] +[ +[1, 2], +[3, 4] +] + +``` + +To know Nx, we'll get to know tensors first. The following overview will touch +on the major libraries. Then, future notebooks will take a deep dive into working +with tensors in detail, autograd, and backends. Then, we'll dive into specific +problem spaces like Axon, the machine learning library. diff --git a/nx/guides/getting_started/quickstart.livemd b/nx/guides/getting_started/quickstart.livemd new file mode 100644 index 0000000000..f0e0472284 --- /dev/null +++ b/nx/guides/getting_started/quickstart.livemd @@ -0,0 +1,395 @@ +# Nx quickstart + +## Prerequisites + +You will need to know a bit of Elixir. For a refresher, check out the +[Elixir Getting Started Guide](https://hexdocs.pm/elixir/introduction.html). + +To work the examples you can run using the livebook buttom in this page. + +#### Learning Objectives + +This is a overview of Nx tensors. In this section, we'll look at some of the various tools for +creating and interacting with tensors. The IEx helpers will assist our +exploration of the core tensor concepts. + +```elixir +import IEx.Helpers +``` + +After reading, you should be able to understand: + +- Create 1, 2 and N-dimensional tensors in `Nx`; +- How to index, slice and iterate through tensors; +- Basic tensor functions; +- How to apply some linear algebra operations to n-dimensional tensors without using for-loops; +- Axis and shape properties for n-dimensional tensors. + +## The Basics + +Now, everything is set up, so we're ready to create some tensors. + +```elixir +Mix.install([ + {:nx, "~> 0.5"} +]) +``` + +### Creating tensors + +The argument must be one of: + +- a tensor +- a number (which means the tensor is scalar/zero-dimensional) +- a boolean (also scalar/zero-dimensional) +- an arbitrarily nested list of numbers and booleans + +If a new tensor is allocated, it will be allocated in the backend defined by +`Nx.default_backend/0`, unless the `:backend` option is given, which overrides the +default. + +#### Examples + +A number returns a tensor of zero dimensions: + +```elixir +Nx.tensor(0) +``` + +```elixir +Nx.tensor(1.0) +``` + +Giving a list returns a vector (a one-dimensional tensor): + +```elixir +Nx.tensor([1, 2, 3]) +``` + +```elixir +Nx.tensor([1.2, 2.3, 3.4, 4.5]) +``` + +Multi-dimensional tensors are also possible: + +```elixir +Nx.tensor([[1, 2, 3], [4, 5, 6]]) +``` + +```elixir +Nx.tensor([[1, 2], [3, 4], [5, 6]]) +``` + +```elixir +Nx.tensor([[[1, 2], [3, 4], [5, 6]], [[-1, -2], [-3, -4], [-5, -6]]]) +``` + +Tensors can also be given as inputs, which is useful for functions that don´t want to care +about the input kind: + +```elixir +Nx.tensor(Nx.tensor([1, 2, 3])) +``` + +### Naming dimensions + +You can provide names for tensor dimensions. Names are atoms: + +```elixir +Nx.tensor([[1, 2, 3], [4, 5, 6]], names: [:x, :y]) +``` + +Names make your code more expressive: + +```elixir +Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, :height, :width]) +``` + +You can also leave dimension names as `nil`: + +```elixir +Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, nil, nil]) +``` + +However, you must provide a name for every dimension in the tensor. 
For example, +the following code snippet raises an error: + +```elixir +Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch]) +``` + +### Indexing and Slicing tensor values + +We can get any cell of the tensor: + +```elixir +tensor = Nx.tensor([[1, 2], [3, 4]], names: [:y, :x]) +tensor[[0, 1]] +``` + +```elixir +tensor = Nx.tensor([[1, 2], [3, 4], [5, 6]], names: [:y, :x]) +tensor[[-1, -1]] +``` + +Now, try getting the first row of the tensor: + +```elixir +# ...your code here... +``` + +We can also get a whole dimension: + +```elixir +tensor[x: 1] +``` + +or a range: + +```elixir +tensor[y: 0..1] +``` + +`tensor[[.., 1]]` will achieve the same result as `tensor[x: 1]`. +This is because Elixir has the syntax sugar `..` for a `0..-1//1` range. + +Now, + +- create your own `{3, 3}` tensor with named dimensions +- return a `{2, 2}` tensor containing the first two columns + of the first two rows + +```elixir +# ...your code here... +``` + +### Floats and Complex numbers + +Besides single-precision (32 bits), floats can have other kinds of precision, such as half-precision (16) or +double-precision (64): + +```elixir +Nx.tensor([1, 2, 3], type: :f16) +``` + +```elixir +Nx.tensor([1, 2, 3], type: :f64) +``` + +Brain-floating points are also supported: + +```elixir +Nx.tensor([1, 2, 3], type: :bf16) +``` + +Certain backends and compilers support 8-bit floats. The precision +implementation of 8-bit floats may change per backend, so you must be careful +when transferring data across. The binary backend implements F8E5M2: + +```elixir +Nx.tensor([1, 2, 3], type: :f8) +``` + +In all cases, the non-finite values negative infinity (-Inf), infinity (Inf), +and "not a number" (NaN) can be represented by the atoms `:neg_infinity`, +`:infinity`, and `:nan respectively`: + +```elixir +Nx.tensor([:neg_infinity, :nan, :infinity]) +``` + +Finally, complex numbers are also supported in tensors: + +```elixir +Nx.tensor(Complex.new(1, -1)) +``` + +Check out the documentation for `Nx.tensor/2` for more documentation on the accepted options. + +## Basic operations + +Nx supports element-wise arithmetic operations for tensors and broadcasting when necessary. + +### Addition + +`Nx.add/2`: Adds corresponding elements of two tensors. + +```elixir +a = Nx.tensor([1, 2, 3]) +b = Nx.tensor([0, 1, 2]) +Nx.add(a , b) +``` + +### Subtraction + +`Nx.subtract/2`: Subtracts the elements of the second tensor from the first. + +```elixir +a = Nx.tensor([10, 20, 30]) +b = Nx.tensor([0, 1, 2]) +Nx.subtract(a , b) +``` + +### Multiplication + +`Nx.multiply/2`: Multiplies corresponding elements of two tensors. + +```elixir +a = Nx.tensor([2, 3, 4]) +b = Nx.tensor([0, 1, 2]) +Nx.multiply(a , b) +``` + +### Division + +`Nx.divide/2`: Divides the elements of the first tensor by the second tensor. + +```elixir +a = Nx.tensor([10, 30, 40]) +b = Nx.tensor([5, 6, 8]) +Nx.divide(a , b) +``` + +### Exponentiation + +`Nx.pow/2`: Raises each element of the first tensor to the power of the corresponding element in the second tensor. + +```elixir +a = Nx.tensor([2, 3, 4]) +b = Nx.tensor([2]) +Nx.pow(a , b) +``` + +### Quotient + +`Nx.quotient/2`: Returns a new tensor where each element is the integer division (div/2) of left by right. + +```elixir +a = Nx.tensor([10, 20, 30]) +b = Nx.tensor([3, 7, 4]) + +Nx.quotient(a, b) +``` + +### Remainder + +`Nx.remainder/2`: Computes the remainder of the division of two integer tensors. 
+ +```elixir +a = Nx.tensor([27, 32, 43]) +b = Nx.tensor([2, 3, 4]) +Nx.remainder(a , b) +``` + +### Negation + +`Nx.negate/1`: Negates each element of a tensor. + +```elixir +a = Nx.tensor([2, 3, 4]) +Nx.negate(a) +``` + +### Square Root + +`Nx.sqrt/1`: It computes the element-wise square root of the given tensor. + +```elixir +a = Nx.tensor([4, 9, 16]) +Nx.sqrt(a) +``` + +## Element-Wise Comparison + +Returns 1 when true and 0 when false + +### Equality and Inequality + +`Nx.equal/2`, `Nx.not_equal/2` + +```elixir +a = Nx.tensor([4, 9, 16]) +b = Nx.tensor([4, 9, 16]) +Nx.equal(a, b) +``` + +```elixir +a = Nx.tensor([4, 9, 16]) +b = Nx.tensor([4.0, 9.0, 16.0]) +Nx.not_equal(a, b) +``` + +### Greater and Less + +`Nx.greater/2`, `Nx.less/2` + +```elixir +a = Nx.tensor([4, 9, 16]) +b = Nx.tensor([4, 8, 17]) +Nx.greater(a, b) +``` + +```elixir +a = Nx.tensor([4, 9, 16]) +b = Nx.tensor([4.2, 9.0, 16.7]) +Nx.less(a, b) +``` + +### Greater_Equal and Less_Equal + +`Nx.greater_equal/2`, `Nx.less_equal/2` + +```elixir +a = Nx.tensor([3, 5, 2]) +b = Nx.tensor([2, 5, 4]) + +Nx.greater_equal(a, b) +``` + +```elixir +a = Nx.tensor([3, 5, 2]) +b = Nx.tensor([2, 5, 4]) + +Nx.less_equal(a, b) +``` + +## Aggregate functions + +These operations aggregate values across tensor axes. + +### Sum + +`Nx.sum/1`: Sums all elements + +```elixir +a = Nx.tensor([[4, 9, 16], [4.2, 9.0, 16.7]]) +Nx.sum(a) +``` + +### Mean + +`Nx.mean/1`: Computes the mean value of the tensor + +```elixir +a = Nx.tensor([[4, 9, 16], [4.2, 9.0, 16.7]]) +Nx.mean(a) +``` + +### Product + +`Nx.product/1`: Computes the product of all elements. + +```elixir +a = Nx.tensor([[4, 9, 16], [4.2, 9.0, 16.7]]) +Nx.product(a) +``` + +## Matrix Multiplication + +`Nx.dot/4`: Computes the generalized dot product between two tensors, given the contracting axes.hyunnnn + +```elixir +t1 = Nx.tensor([[1, 2], [3, 4]], names: [:x, :y]) +t2 = Nx.tensor([[10, 20], [30, 40]], names: [:height, :width]) +Nx.dot(t1, [0], t2, [0]) +``` diff --git a/nx/mix.exs b/nx/mix.exs index a43972cf17..6e682112cc 100644 --- a/nx/mix.exs +++ b/nx/mix.exs @@ -58,6 +58,8 @@ defmodule Nx.MixProject do extras: [ "CHANGELOG.md", "guides/intro-to-nx.livemd", + "guides/getting_started/introduction.md", + "guides/getting_started/quickstart.livemd", "guides/advanced/vectorization.livemd", "guides/advanced/aggregation.livemd", "guides/exercises/exercises-1-20.livemd" @@ -112,6 +114,7 @@ defmodule Nx.MixProject do ] ], groups_for_extras: [ + Getting_Started: ~r"^guides/getting_started/", Exercises: ~r"^guides/exercises/", Advanced: ~r"^guides/advanced/" ] From ba529c6bfaf27463beeabd2c1c328b9c44bb2371 Mon Sep 17 00:00:00 2001 From: Paulo Valente <16843419+polvalente@users.noreply.github.com> Date: Tue, 4 Feb 2025 09:26:00 -0300 Subject: [PATCH 2/4] Update introduction.md --- nx/guides/getting_started/introduction.md | 38 ++++++++++------------- 1 file changed, 17 insertions(+), 21 deletions(-) diff --git a/nx/guides/getting_started/introduction.md b/nx/guides/getting_started/introduction.md index 8a3586a86c..d70b48167d 100644 --- a/nx/guides/getting_started/introduction.md +++ b/nx/guides/getting_started/introduction.md @@ -1,30 +1,28 @@ # What is Nx? -Nx is the numerical computing library of Elixir. Since Elixir´s primary numerical datatypes and structures are not optimized for numerical programming, Nx is the fundamental package built to bridge this gap. +Nx is the numerical computing library of Elixir. 
Since Elixir's primary numerical datatypes and structures are not optimized for numerical programming, Nx is the fundamental package built to bridge this gap.

[Elixir Nx](https://github.com/elixir-nx/nx) smoothly integrates typed, multidimensional data called [tensors](introduction.html#what-are-tensors).
Nx has four primary capabilities:

- In Nx, tensors hold typed data in multiple, optionally named dimensions.
- Numerical definitions, known as `defn`, support custom code with
  tensor-aware operators and functions.
- [Automatic differentiation](https://arxiv.org/abs/1502.05767), also known as
  autograd or autodiff, supports common computational scenarios
  such as machine learning, simulations, curve fitting, and probabilistic models.
- Broadcasting, which implicitly expands tensor shapes so that operations can be
  applied element by element. Most of the Nx operations broadcast automatically.
  You can read more on broadcasting [here](intro-to-nx.html#broadcasts).

Nx tensors can hold unsigned integers (u2, u4, u8, u16, u32, u64),
signed integers (s2, s4, s8, s16, s32, s64),
floats (f8, f16, f32, f64), brain floats (bf16), and complex (c64, c128).
Tensors support backends implemented outside of Elixir, such as Google's
Accelerated Linear Algebra (XLA) and PyTorch.

Numerical definitions provide compiler support to allow just-in-time compilation
targeting specialized processors to speed up numeric computation, including
TPUs and GPUs.

## What are Tensors?

Output:

```
#Nx.Tensor<
  s32[2][2]
  [
    [1, 2],
    [3, 4]
  ]
>
```

To learn Nx, we'll get to know tensors first. The following overview will touch
on the major features. The advanced section of the documentation will take a deep dive into working
with tensors in detail, autodiff, and backends.
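Before moving on, here is a minimal, self-contained sketch of `defn` and autodiff working together. The `Demo` module name and its sine-sum function are arbitrary choices for illustration; the sketch relies only on the public `Nx.Defn` API:

```elixir
Mix.install([{:nx, "~> 0.5"}])

defmodule Demo do
  import Nx.Defn

  # A numerical definition: ordinary-looking Elixir code compiled
  # with tensor-aware operators.
  defn sum_of_sines(t), do: t |> Nx.sin() |> Nx.sum()
end

x = Nx.tensor([0.0, 1.0, 2.0])

# Runs the definition over the whole tensor at once.
Demo.sum_of_sines(x)

# Autodiff: Nx.Defn.grad/1 builds the gradient function, here cos(x).
Nx.Defn.grad(&Demo.sum_of_sines/1).(x)
```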
From 0a97e25bb462a6b290aa5a3a9c0426e2c6678e89 Mon Sep 17 00:00:00 2001
From: Paulo Valente <16843419+polvalente@users.noreply.github.com>
Date: Tue, 4 Feb 2025 10:02:48 -0300
Subject: [PATCH 3/4] Update quickstart.livemd

---
 nx/guides/getting_started/quickstart.livemd | 100 ++++++++++----------
 1 file changed, 52 insertions(+), 48 deletions(-)

diff --git a/nx/guides/getting_started/quickstart.livemd b/nx/guides/getting_started/quickstart.livemd
index f0e0472284..35ac538e79 100644
--- a/nx/guides/getting_started/quickstart.livemd
+++ b/nx/guides/getting_started/quickstart.livemd

# Nx quickstart

## Prerequisites

To properly use Nx, you will need to know a bit of Elixir. For a refresher, check out the
[Elixir Getting Started Guide](https://hexdocs.pm/elixir/introduction.html).

To run the examples, use the "Run in Livebook" button at the top of this page.

#### Learning Objectives

This is an overview of Nx tensors. In this section, we'll look at some of the tools for
creating and interacting with tensors.

After reading, you should know how to:

- Create 1, 2 and N-dimensional tensors in `Nx`;
- Index, slice and iterate through tensors;
- Use basic tensor functions;
- Apply some linear algebra operations to n-dimensional tensors without using for-loops;
- Work with the axis and shape properties of n-dimensional tensors.

## The Basics

First, let's install Nx with `Mix.install`.

```elixir
Mix.install([
  {:nx, "~> 0.5"}
])
```

The `IEx.Helpers` module will assist our exploration of the core tensor concepts.

```elixir
import IEx.Helpers
```

### Creating tensors

The argument for `Nx.tensor/1` must be one of:

- a tensor;
- a number (which means the tensor is scalar/zero-dimensional);
- a boolean (also scalar/zero-dimensional);
- an arbitrarily nested list of numbers and booleans;
- the special atoms `:nan`, `:infinity` and `:neg_infinity`, which represent values not supported by Elixir floats.

If a new tensor is allocated, it will be allocated in the backend defined by the `:backend` option.
If it is not provided, `Nx.default_backend/0` will be used instead.
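As a small sketch, this pins a single tensor to `Nx.BinaryBackend`, the pure-Elixir backend that ships with Nx:

```elixir
# The :backend option overrides Nx.default_backend/0 for this tensor only.
Nx.tensor([1, 2, 3], backend: Nx.BinaryBackend)
```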
#### Examples -A number returns a tensor of zero dimensions: +A number returns a tensor of zero dimensions, also known as a scalar: ```elixir Nx.tensor(0) @@ -60,7 +61,7 @@ Nx.tensor(0) Nx.tensor(1.0) ``` -Giving a list returns a vector (a one-dimensional tensor): +A list returns a one-dimensional tensor, also known as a vector: ```elixir Nx.tensor([1, 2, 3]) @@ -70,7 +71,7 @@ Nx.tensor([1, 2, 3]) Nx.tensor([1.2, 2.3, 3.4, 4.5]) ``` -Multi-dimensional tensors are also possible: +Higher dimensional tensors are also possible: ```elixir Nx.tensor([[1, 2, 3], [4, 5, 6]]) @@ -105,14 +106,14 @@ Names make your code more expressive: Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, :height, :width]) ``` -You can also leave dimension names as `nil`: +You can also leave dimension names as `nil` (which is the default): ```elixir Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, nil, nil]) ``` However, you must provide a name for every dimension in the tensor. For example, -the following code snippet raises an error: +the following code snippet raises an error because 1 name is given, but there are 3 dimensions: ```elixir Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch]) @@ -127,6 +128,9 @@ tensor = Nx.tensor([[1, 2], [3, 4]], names: [:y, :x]) tensor[[0, 1]] ``` +Negative indices will start counting from the end of the axis. +`-1` is the last entry, `-2` the second to last and so on. + ```elixir tensor = Nx.tensor([[1, 2], [3, 4], [5, 6]], names: [:y, :x]) tensor[[-1, -1]] @@ -169,22 +173,22 @@ Besides single-precision (32 bits), floats can have other kinds of precision, su double-precision (64): ```elixir -Nx.tensor([1, 2, 3], type: :f16) +Nx.tensor([0.0, 0.2, 0.4, 1.0], type: :f16) ``` ```elixir -Nx.tensor([1, 2, 3], type: :f64) +Nx.tensor([0.0, 0.2, 0.4, 1.0, type: :f64) ``` -Brain-floating points are also supported: +Brain floats are also supported: ```elixir -Nx.tensor([1, 2, 3], type: :bf16) +Nx.tensor([0.0, 0.2, 0.4, 1.0, type: :bf16) ``` Certain backends and compilers support 8-bit floats. The precision implementation of 8-bit floats may change per backend, so you must be careful -when transferring data across. The binary backend implements F8E5M2: +when transferring data across different backends. 
The binary backend implements F8E5M2:

```elixir
Nx.tensor([1, 2, 3], type: :f8)
```

In all cases, the non-finite values negative infinity (-Inf), infinity (Inf),
and "not a number" (NaN) can be represented by the atoms `:neg_infinity`,
`:infinity`, and `:nan`, respectively:

```elixir
Nx.tensor([:neg_infinity, :nan, :infinity])
```

Finally, complex numbers are also supported in tensors, in both 32-bit and 64-bit precision:

```elixir
Nx.tensor(Complex.new(1, -1))
```

Check out the documentation for `Nx.tensor/2` for more details on the accepted options.

## Basic operations

Nx supports element-wise arithmetic operations for tensors, broadcasting when necessary.

### Addition

`Nx.add/2`: Adds corresponding elements of two tensors.

```elixir
a = Nx.tensor([1, 2, 3])
b = Nx.tensor([0, 1, 2])
Nx.add(a, b)
```

### Subtraction

`Nx.subtract/2`: Subtracts the elements of the second tensor from the first.

```elixir
a = Nx.tensor([10, 20, 30])
b = Nx.tensor([0, 1, 2])
Nx.subtract(a, b)
```

### Multiplication

`Nx.multiply/2`: Multiplies corresponding elements of two tensors.

```elixir
a = Nx.tensor([2, 3, 4])
b = Nx.tensor([0, 1, 2])
Nx.multiply(a, b)
```

### Division

`Nx.divide/2`: Divides the elements of the first tensor by those of the second.

```elixir
a = Nx.tensor([10, 30, 40])
b = Nx.tensor([5, 6, 8])
Nx.divide(a, b)
```

### Exponentiation

`Nx.pow/2`: Raises each element of the first tensor to the power of the corresponding element in the second tensor.

```elixir
a = Nx.tensor([2, 3, 4])
b = Nx.tensor([2])
Nx.pow(a, b)
```

### Quotient

`Nx.quotient/2`: Returns a new tensor where each element is the integer division (`div/2`) of the left tensor by the right one.

```elixir
a = Nx.tensor([10, 20, 30])
b = Nx.tensor([3, 7, 4])

Nx.quotient(a, b)
```

### Remainder

`Nx.remainder/2`: Computes the remainder of the integer division.

```elixir
a = Nx.tensor([27, 32, 43])
b = Nx.tensor([2, 3, 4])
Nx.remainder(a, b)
```

### Negation

`Nx.negate/1`: Negates each element of a tensor.

```elixir
a = Nx.tensor([2, 3, 4])
Nx.negate(a)
```

### Square Root

`Nx.sqrt/1`: Computes the element-wise square root.

```elixir
a = Nx.tensor([4, 9, 16])
Nx.sqrt(a)
```

## Element-Wise Comparison

The following operations return a u8 tensor where 1 represents `true` and 0 represents `false`.

### Equality and Inequality

`Nx.equal/2`, `Nx.not_equal/2`

```elixir
a = Nx.tensor([4, 9, 16])
b = Nx.tensor([4, 9, -16])
Nx.equal(a, b)
```

```elixir
a = Nx.tensor([4, 9, 16])
b = Nx.tensor([4.0, 9.0, -16.0])
Nx.not_equal(a, b)
```

### Greater and Less

`Nx.greater/2`, `Nx.less/2`

```elixir
a = Nx.tensor([4, 9, 16])
b = Nx.tensor([4, 8, 17])
Nx.greater(a, b)
```

```elixir
a = Nx.tensor([4, 9, 16])
b = Nx.tensor([4.2, 9.0, 15.9])
Nx.less(a, b)
```

### Greater or Equal and Less or Equal

`Nx.greater_equal/2`, `Nx.less_equal/2`

```elixir
a = Nx.tensor([4, 9, 16])
b = Nx.tensor([4, 8, 17])

Nx.greater_equal(a, b)
```

```elixir
a = Nx.tensor([4, 9, 16])
b = Nx.tensor([4.2, 9.0, 15.9])

Nx.less_equal(a, b)
```

## Aggregate functions

These operations aggregate values across tensor axes.

### Sum

`Nx.sum/1`: Sums all elements.

```elixir
a = Nx.tensor([[4, 9, 16], [4.2, 9.0, 16.7]])
Nx.sum(a)
```

### Mean

`Nx.mean/1`: Computes the mean value of the elements.
```elixir
a = Nx.tensor([[4, 9, 16], [4.2, 9.0, 16.7]])
Nx.mean(a)
```

### Product

`Nx.product/1`: Computes the product of the elements.

```elixir
a = Nx.tensor([[4, 9, 16], [4.2, 9.0, 16.7]])
Nx.product(a)
```

## Matrix Multiplication

`Nx.dot/4`: Computes the generalized dot product between two tensors, given the contracting axes.

```elixir
t1 = Nx.tensor([[1, 2], [3, 4]], names: [:x, :y])
t2 = Nx.tensor([[10, 20], [30, 40]], names: [:height, :width])
Nx.dot(t1, [0], t2, [0])
```

From 22aab8f08475805f77206288591d3ea5b687f59e Mon Sep 17 00:00:00 2001
From: TomasPegado
Date: Mon, 10 Feb 2025 10:35:53 -0300
Subject: [PATCH 4/4] docs: add installation guide

---
 nx/guides/getting_started/installation.md   | 186 ++++++++++++++++++++
 nx/guides/getting_started/quickstart.livemd |  25 ++-
 nx/mix.exs                                  |   3 +-
 3 files changed, 210 insertions(+), 4 deletions(-)
 create mode 100644 nx/guides/getting_started/installation.md

diff --git a/nx/guides/getting_started/installation.md b/nx/guides/getting_started/installation.md
new file mode 100644
index 0000000000..f00b2f0655
--- /dev/null
+++ b/nx/guides/getting_started/installation.md

# Installation

The only prerequisite for installing Nx is Elixir itself. If you don't have Elixir installed
on your machine, you can visit this [installation page](https://elixir-lang.org/install.html).

There are several ways to install Nx (Numerical Elixir), depending on your project type and needs.

## Using Mix in a Standard Elixir Project

If you are working inside a Mix project, the recommended way to install Nx is by adding it to your `mix.exs` dependencies:

1. Open `mix.exs` and modify the `deps` function:

```elixir
defp deps do
  [
    {:nx, "~> 0.5"} # Install the latest stable version
  ]
end
```

2. Fetch the dependencies by running the following in your terminal:

```sh
mix deps.get
```

## Installing Nx from GitHub (Latest Development Version)

If you need the latest, unreleased features, install Nx directly from the GitHub repository.

1. Modify `mix.exs`:

```elixir
defp deps do
  [
    {:nx, github: "elixir-nx/nx", branch: "main"}
  ]
end
```

2. Fetch dependencies:

```sh
mix deps.get
```

## Installing Nx in a Standalone Script (Without a Mix Project)

If you don't have a Mix project and just want to run a standalone script, use `Mix.install/1` to dynamically fetch and install Nx.

```elixir
Mix.install([:nx])

tensor = Nx.tensor([1, 2, 3])
IO.inspect(tensor)
```

Run the script with:

```sh
elixir my_script.exs
```

Best for: Quick experiments, small scripts, or one-off computations.

## Installing the Latest Nx from GitHub in a Standalone Script

To use the latest development version in a script (without a Mix project):

```elixir
Mix.install([
  {:nx, github: "elixir-nx/nx", branch: "main"}
])

tensor = Nx.tensor([1, 2, 3])
IO.inspect(tensor)
```

Run:

```sh
elixir my_script.exs
```

Best for: Trying new features from Nx without creating a full project.

## Installing Nx with EXLA for GPU Acceleration

To enable GPU/TPU acceleration with Google's XLA backend, install Nx along with EXLA:

1. Modify `mix.exs`:

```elixir
defp deps do
  [
    {:nx, "~> 0.5"},
    {:exla, "~> 0.5"} # EXLA (Google XLA Backend)
  ]
end
```

2. Fetch dependencies:

```sh
mix deps.get
```

3. Set EXLA as the default backend:
```elixir
# Make EXLA the backend for tensors created in this process.
Nx.default_backend(EXLA.Backend)
```

Best for: Running Nx on GPUs or TPUs using Google's XLA compiler.

## Installing Nx with Torchx for PyTorch Acceleration

To run Nx operations on PyTorch's backend (LibTorch):

1. Modify `mix.exs`:

```elixir
defp deps do
  [
    {:nx, "~> 0.5"},
    {:torchx, "~> 0.5"} # PyTorch Backend
  ]
end
```

2. Fetch dependencies:

```sh
mix deps.get
```

3. Set Torchx as the default backend:

```elixir
# Make Torchx the backend for tensors created in this process.
Nx.default_backend(Torchx.Backend)
```

Best for: Deep learning applications with PyTorch acceleration.

## Installing Nx with OpenBLAS for CPU Optimization

To optimize CPU performance with OpenBLAS:

1. Install OpenBLAS (libopenblas):

   - Ubuntu/Debian:

     ```sh
     sudo apt install libopenblas-dev
     ```

   - macOS (using Homebrew):

     ```sh
     brew install openblas
     ```

2. Modify `mix.exs`:

```elixir
defp deps do
  [
    {:nx, "~> 0.5"},
    {:openblas, "~> 0.5"} # CPU-optimized BLAS backend
  ]
end
```

3. Fetch dependencies:

```sh
mix deps.get
```

Best for: Optimizing CPU-based tensor computations.

diff --git a/nx/guides/getting_started/quickstart.livemd b/nx/guides/getting_started/quickstart.livemd
index 35ac538e79..36064a7971 100644
--- a/nx/guides/getting_started/quickstart.livemd
+++ b/nx/guides/getting_started/quickstart.livemd

```elixir
Nx.tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]], names: [:batch, :height, :width])
```

We created a tensor of shape `{1, 3, 3}`, with three named axes: `batch`, `height` and `width`.

You can also leave dimension names as `nil` (which is the default):

```elixir
tensor = Nx.tensor([[1, 2], [3, 4]], names: [:y, :x])
tensor[[0, 1]]
```

Negative indices will start counting from the end of the axis.
`-1` is the last entry, `-2` the second to last and so on.

```elixir
# ...your code here...
```

### Tensor shape and reshape

You can check the shape of a tensor with `Nx.shape/1`:

```elixir
Nx.shape(tensor)
```

We can also create a new tensor with a new shape using `Nx.reshape/2`:

```elixir
Nx.reshape(tensor, {1, 6}, names: [:batches, :values])
```

This operation reuses all of the tensor data and simply
changes the metadata, so it has no notable cost.

The new tensor has the same type, but a new shape.

### Floats and Complex numbers

```elixir
Nx.tensor([0.0, 0.2, 0.4, 1.0], type: :f16)
```

```elixir
Nx.tensor([0.0, 0.2, 0.4, 1.0], type: :f64)
```

Brain floats are also supported:

```elixir
Nx.tensor([0.0, 0.2, 0.4, 1.0], type: :bf16)
```

Certain backends and compilers support 8-bit floats. The precision
implementation of 8-bit floats may change per backend.

diff --git a/nx/mix.exs b/nx/mix.exs
index 6e682112cc..90ffa9c51e 100644
--- a/nx/mix.exs
+++ b/nx/mix.exs

        "CHANGELOG.md",
        "guides/intro-to-nx.livemd",
        "guides/getting_started/introduction.md",
        "guides/getting_started/installation.md",
        "guides/getting_started/quickstart.livemd",
        "guides/advanced/vectorization.livemd",
        "guides/advanced/aggregation.livemd",
        "guides/exercises/exercises-1-20.livemd"

      groups_for_extras: [
        "Getting Started": ~r"^guides/getting_started/",
        Exercises: ~r"^guides/exercises/",
        Advanced: ~r"^guides/advanced/"
      ]