distribution_discretization

from particula.util import distribution_discretization
help(distribution_discretization)
Help on module particula.util.distribution_discretization in particula.util:

NAME
    particula.util.distribution_discretization - discretization of the distribution of the particles

FUNCTIONS
    discretize(interval=None, disttype='lognormal', gsigma=array(1.25), mode=<Quantity(1e-07, 'meter')>, nparticles=array(100000.), **kwargs)
        discretize the distribution of the particles
        
        Args:
            interval    (float) the size interval of the distribution
            disttype    (str)   the type of distribution, "lognormal" for now
            gsigma      (float) geometric standard deviation of distribution
            mode        (float) pdf scale (corresponds to mode in lognormal)
            nparticles  (float) number of particles (weight of each mode)

DATA
    lognorm = <scipy.stats._continuous_distns.lognorm_gen object>
        A lognormal continuous random variable.
        
        As an instance of the `rv_continuous` class, `lognorm` object inherits from it
        a collection of generic methods (see below for the full list),
        and completes them with details specific for this particular distribution.
        
        Methods
        -------
        rvs(s, loc=0, scale=1, size=1, random_state=None)
            Random variates.
        pdf(x, s, loc=0, scale=1)
            Probability density function.
        logpdf(x, s, loc=0, scale=1)
            Log of the probability density function.
        cdf(x, s, loc=0, scale=1)
            Cumulative distribution function.
        logcdf(x, s, loc=0, scale=1)
            Log of the cumulative distribution function.
        sf(x, s, loc=0, scale=1)
            Survival function  (also defined as ``1 - cdf``, but `sf` is sometimes more accurate).
        logsf(x, s, loc=0, scale=1)
            Log of the survival function.
        ppf(q, s, loc=0, scale=1)
            Percent point function (inverse of ``cdf`` --- percentiles).
        isf(q, s, loc=0, scale=1)
            Inverse survival function (inverse of ``sf``).
        moment(order, s, loc=0, scale=1)
            Non-central moment of the specified order.
        stats(s, loc=0, scale=1, moments='mv')
            Mean('m'), variance('v'), skew('s'), and/or kurtosis('k').
        entropy(s, loc=0, scale=1)
            (Differential) entropy of the RV.
        fit(data)
            Parameter estimates for generic data.
            See `scipy.stats.rv_continuous.fit <https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.fit.html#scipy.stats.rv_continuous.fit>`__ for detailed documentation of the
            keyword arguments.
        expect(func, args=(s,), loc=0, scale=1, lb=None, ub=None, conditional=False, **kwds)
            Expected value of a function (of one argument) with respect to the distribution.
        median(s, loc=0, scale=1)
            Median of the distribution.
        mean(s, loc=0, scale=1)
            Mean of the distribution.
        var(s, loc=0, scale=1)
            Variance of the distribution.
        std(s, loc=0, scale=1)
            Standard deviation of the distribution.
        interval(confidence, s, loc=0, scale=1)
            Confidence interval with equal areas around the median.
        
        Notes
        -----
        The probability density function for `lognorm` is:
        
        .. math::
        
            f(x, s) = \frac{1}{s x \sqrt{2\pi}}
                      \exp\left(-\frac{\log^2(x)}{2s^2}\right)
        
        for :math:`x > 0`, :math:`s > 0`.
        
        `lognorm` takes ``s`` as a shape parameter for :math:`s`.
        
        The probability density above is defined in the "standardized" form. To shift
        and/or scale the distribution use the ``loc`` and ``scale`` parameters.
        Specifically, ``lognorm.pdf(x, s, loc, scale)`` is identically
        equivalent to ``lognorm.pdf(y, s) / scale`` with
        ``y = (x - loc) / scale``. Note that shifting the location of a distribution
        does not make it a "noncentral" distribution; noncentral generalizations of
        some distributions are available in separate classes.
        
        Suppose a normally distributed random variable ``X`` has  mean ``mu`` and
        standard deviation ``sigma``. Then ``Y = exp(X)`` is lognormally
        distributed with ``s = sigma`` and ``scale = exp(mu)``.
        
        Examples
        --------
        >>> import numpy as np
        >>> from scipy.stats import lognorm
        >>> import matplotlib.pyplot as plt
        >>> fig, ax = plt.subplots(1, 1)
        
        Calculate the first four moments:
        
        >>> s = 0.954
        >>> mean, var, skew, kurt = lognorm.stats(s, moments='mvsk')
        
        Display the probability density function (``pdf``):
        
        >>> x = np.linspace(lognorm.ppf(0.01, s),
        ...                 lognorm.ppf(0.99, s), 100)
        >>> ax.plot(x, lognorm.pdf(x, s),
        ...        'r-', lw=5, alpha=0.6, label='lognorm pdf')
        
        Alternatively, the distribution object can be called (as a function)
        to fix the shape, location and scale parameters. This returns a "frozen"
        RV object holding the given parameters fixed.
        
        Freeze the distribution and display the frozen ``pdf``:
        
        >>> rv = lognorm(s)
        >>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')
        
        Check accuracy of ``cdf`` and ``ppf``:
        
        >>> vals = lognorm.ppf([0.001, 0.5, 0.999], s)
        >>> np.allclose([0.001, 0.5, 0.999], lognorm.cdf(vals, s))
        True
        
        Generate random numbers:
        
        >>> r = lognorm.rvs(s, size=1000)
        
        And compare the histogram:
        
        >>> ax.hist(r, density=True, bins='auto', histtype='stepfilled', alpha=0.2)
        >>> ax.set_xlim([x[0], x[-1]])
        >>> ax.legend(loc='best', frameon=False)
        >>> plt.show()
        
        
        The logarithm of a log-normally distributed random variable is
        normally distributed:
        
        >>> import numpy as np
        >>> import matplotlib.pyplot as plt
        >>> from scipy import stats
        >>> fig, ax = plt.subplots(1, 1)
        >>> mu, sigma = 2, 0.5
        >>> X = stats.norm(loc=mu, scale=sigma)
        >>> Y = stats.lognorm(s=sigma, scale=np.exp(mu))
        >>> x = np.linspace(*X.interval(0.999))
        >>> y = Y.rvs(size=10000)
        >>> ax.plot(x, X.pdf(x), label='X (pdf)')
        >>> ax.hist(np.log(y), density=True, bins=x, label='log(Y) (histogram)')
        >>> ax.legend()
        >>> plt.show()
    
    u = <pint.registry.UnitRegistry object>

FILE
    /opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/particula/util/distribution_discretization.py
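
A minimal usage sketch (not part of the generated help above), assuming the signature shown: the size interval is passed as a pint quantity in meters and the default lognormal settings are kept.

import numpy as np
from particula import u
from particula.util.distribution_discretization import discretize

# hypothetical size grid: radii from 10 nm to 1000 nm, as a pint quantity
radii = u.Quantity(np.linspace(10e-9, 1000e-9, 500), "m")

# single lognormal mode at 100 nm with the default gsigma of 1.25
dist = discretize(interval=radii, mode=100e-9 * u.m)
print(dist.u)  # units are the inverse of the interval's units (1 / meter)
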
import inspect
print(inspect.getsource(distribution_discretization))
""" discretization of the distribution of the particles
"""
import numpy as np
from scipy.stats import lognorm
from particula import u
from particula.util.input_handling import in_scalar, in_radius


def discretize(
    interval=None,
    disttype="lognormal",
    gsigma=in_scalar(1.25).m,
    mode=in_radius(100e-9),
    nparticles=in_scalar(1e5).m,
    **kwargs
):
    """ discretize the distribution of the particles

        Args:
            interval    (float) the size interval of the distribution
            disttype    (str)   the type of distribution, "lognormal" for now
            gsigma      (float) geometric standard deviation of distribution
            mode        (float) pdf scale (corresponds to mode in lognormal)
            nparticles  (float) number of particles (weight of each mode)
    """

    _ = kwargs.get("something")  # extra keyword arguments are accepted but ignored
    if not isinstance(mode, u.Quantity):
        mode = in_radius(mode)

    if interval is None:
        raise ValueError("the 'interval' must be specified!")

    if not isinstance(interval, u.Quantity):
        interval = u.Quantity(interval, " ")

    if disttype != "lognormal":
        raise ValueError("the 'disttype' must be 'lognormal' for now!")

    # one lognormal pdf per row (one row per mode/gsigma value), weighted by
    # nparticles, summed over the modes, then normalized by the total particle
    # count and the number of modes; dividing by interval.u carries the units
    return ((
        lognorm.pdf(
            x=interval.m,
            s=np.reshape(np.log(gsigma), (np.array([gsigma]).size, 1)),
            scale=np.reshape([mode.m], (np.array([mode.m]).size, 1)),
        ) / interval.u
        * np.reshape([nparticles], (np.array([nparticles]).size, 1))
    ).sum(axis=0) /
        np.array([nparticles]).sum() /
        np.max([np.array([mode.m]).size, np.array([gsigma]).size])
    )
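
The source above builds the distribution by evaluating lognorm.pdf with shape s = log(gsigma) and scale = mode (one row per mode), weighting each row by nparticles, and then normalizing by the total particle count and the number of modes. As a single-mode sanity check, sketched under the same assumptions as the example above, the returned pdf should integrate to roughly one over a span that covers the mode:

import numpy as np
from particula import u
from particula.util.distribution_discretization import discretize

radii = u.Quantity(np.linspace(1e-9, 1000e-9, 2000), "m")
dist = discretize(interval=radii, mode=100e-9 * u.m, gsigma=1.25)

# for a single mode the nparticles weight cancels in the normalization,
# so the area under the returned pdf should be close to 1
area = np.trapz(dist.m, radii.m)
print(round(area, 3))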