We are happy to announce that Stan 2.25 is now available! This release cycle brings mostly quality-of-life features for Stan users and developers.

**How to install?**

The 2.25 CmdStan release is here. Download the tar.gz file, extract it, and use it the way you use any CmdStan release. The updated 2.25 CmdStan guide is here.

With cmdstanr you can upgrade to the latest release using

```
library(cmdstanr)
install_cmdstan()
```

With cmdstanpy you can upgrade to the latest release using

```
import cmdstanpy
cmdstanpy.install_cmdstan()
```

**Vectorized binary functions**

First, for users, we’ve started adding vectorized binary functions to the language. This means that users can now write code such as

```
matrix[17, 93] u[12];
matrix[17, 93] z[12];
z = pow(u, 2.0);
```

which provides the same results as calling

```
for (k in 1:12) {
  for (j in 1:93) {
    for (i in 1:17) {
      z[k, i, j] = pow(u[k, i, j], 2.0);
    }
  }
}
```

You can see the full description of these in the 2.25 function documentation. We are still adding vectorized binary functions to the language; this is the list of the ones available now:

- bessel_first_kind, bessel_second_kind
- beta, lbeta
- binary_log_loss
- binomial_coefficient_log
- choose
- falling_factorial, rising_factorial, log_falling_factorial, log_rising_factorial
- fdim, fmax, fmin, fmod
- gamma_p, gamma_q
- hypot
- ldexp
- lmgamma
- log_diff_exp, log_inv_logit_diff
- log_modified_bessel_first_kind, modified_bessel_first_kind, modified_bessel_second_kind
- multiply_log
- owens_t
- pow

**Improved reliability and minor CmdStan user-facing improvements**

- `C0` in `gaussian_dlm_obs_lpdf` and `gaussian_dlm_obs_rng` may now be a positive semidefinite matrix.
- `binomial_lpmf` now works more reliably when the probability parameter is 0.0 or 1.0.
- We’ve added an option to control the number of significant figures in the CmdStan output CSV, as well as when working with `stansummary`.
- Users can now download a specific version of stanc3, not only the most recent one.
- We fixed a bug when building the Boost library on macOS.

**User-controlled unnormalized distribution syntax for `target +=`**

As you are probably aware

```
target += normal_lpdf(x | mu, sigma);
```

and

```
x ~ normal(mu, sigma);
```

behave differently. The functional form, and hence `target +=`, includes normalizing constants (like `log √(2π)` in `normal_lpdf`). The sampling statement form (with `~`) drops normalizing constants and everything else not necessary for MCMC.

We have now added the option of using unnormalized distributions with the `target +=` syntax as well. This can be done by using the `_lupdf` or `_lupmf` suffix. So, for example,

```
target += normal_lupdf(x | mu, sigma);
```

is now equivalent to the sampling statement above. Documentation for each unnormalized distribution is available in the docs (for example, the normal distribution here).

This feature is especially useful with `reduce_sum`, where sampling statements cannot be used.
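For instance, here is a minimal sketch of a model that uses `normal_lupdf` inside a `reduce_sum` partial sum function, following the pattern in the user's guide (the `_lupdf` calls must appear in the model block or in a user-defined `_lpdf` function, and defining `partial_sum_lpdf` also makes the unnormalized `partial_sum_lupdf` variant available):

```
functions {
  real partial_sum_lpdf(real[] y_slice, int start, int end,
                        real mu, real sigma) {
    return normal_lupdf(y_slice | mu, sigma);
  }
}
data {
  int<lower=0> N;
  real y[N];
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  int grainsize = 1;
  mu ~ normal(0, 1);
  sigma ~ normal(0, 1);
  target += reduce_sum(partial_sum_lupdf, y, grainsize, mu, sigma);
}
```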

**Simplified makefile access to C++ compiler optimizations**

The backend Stan Math library is in the middle of a large refactor. Due to some of the changes in the backend, users who utilize the ODE solvers in Stan may see a small performance decrease in some cases.

To fix that, you can add `STAN_COMPILER_OPTIMS=TRUE` to `make/local` to turn on link-time optimization for Stan, which should remove any performance issues. Turning these optimizations on can lead to speedups in other models as well. We are still investigating where and when this is beneficial, in order to handle these optimizations automatically in the next release.

**OpenCL support**

Users can now use GLM functions with OpenCL on GPUs in cases where any argument is a parameter: we’ve rewritten them to accept either parameters or data for any of their input arguments. The newest release of brms can use the CmdStan backend, so it should be easier for users to access these methods.

**Changes in the Stan backend**

The Stan Math backend is undergoing a lot of changes at the moment (we’ve had 99 PRs since the last release!). There are three larger projects being led by Steve Bronder, Tadej Ciglarič, and Ben Bales. These are:

- Better handling and use of Eigen expressions

Almost all functions in the Stan Math library were refactored to handle Eigen expressions and to use Eigen expressions internally. This will lead to better efficiency in the future, and for some functions we have already observed significant speedups.

- More efficient matrix algebra

We have reworked some major parts of Stan so that we can be much more efficient at matrix algebra. This is still a work in progress, but you can read more about it in this thread. While this has not yet been exposed in the Stan language, we had to make some changes in the backend that can affect current Stan programs. We made sure there was no serious performance hit to current Stan programs and that the fast code we are writing now gives the same numeric answers as our current methods.

- Refactored reverse mode autodiff functions

Tadej figured out a wonderfully nice pattern for writing reverse mode autodiff functions, which we call `reverse_pass_callback()`. `reverse_pass_callback()` builds on the fact that reverse mode autodiff consists of:

- running the regular function,
- saving the data, and
- adding a callback to a stack to calculate the adjoints in the reverse pass.

The pattern leads to some rather pretty code. It also leads to a speedup of 15% or so in some cases, which is nice.

We would also like to note that we have put a lot of effort into testing these backend changes. We are running function-level performance tests and also checking all Math functions for leaks with an address sanitizer. But we still need your help in making sure none of these refactors affected your Stan models. So please try your models and report if you see any improvements or, more importantly, any performance regressions.

Thank you to all of our users who tested the 2.25 release candidate! A few issues were caught and fixed that very much helped polish up the release.