
fedopt

FedOpt(learning_rate=0.1, betas=(0.9, 0.999), t=0.001)

Implementation based on https://arxiv.org/abs/2003.00295.
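FedOpt is the server-side optimizer from adaptive federated optimization: the server treats the aggregated client update as a pseudo-gradient of the global model and applies an adaptive step to its parameters. Below is a minimal usage sketch; the mean aggregation of client updates and the surrounding setup are illustrative assumptions, not part of this API:

import numpy as np

from iflearner.business.homo.strategy.opt.fedopt import FedOpt

opt = FedOpt(learning_rate=0.1, betas=(0.9, 0.999), t=0.001)

# Current global model parameters held by the server (illustrative shapes).
server_params = {"w": np.zeros(10, dtype=np.float32)}
opt.set_params(server_params)

# Pseudo-gradient: mean of this round's client updates (illustrative aggregation).
client_updates = [{"w": np.random.randn(10).astype(np.float32)} for _ in range(3)]
pseudo_gradient = {
    name: np.mean([u[name] for u in client_updates], axis=0)
    for name in server_params
}

# One adaptive server step; returns the updated global parameters.
server_params = opt.step(pseudo_gradient)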

Attributes:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `learning_rate` | `float` | Learning rate. Defaults to 0.1. |
| `betas` | `Tuple[float, float]` | Coefficients used for computing running averages of the gradient and its square. Defaults to (0.9, 0.999). |
| `t` | `float` | Adaptivity parameter (τ in the paper). Defaults to 0.001. |

Source code in iflearner/business/homo/strategy/opt/fedopt.py
def __init__(
    self,
    learning_rate: float = 0.1,
    betas: Tuple[float, float] = (0.9, 0.999),
    t: float = 0.001,
) -> None:
    # Server-side learning rate used when applying the pseudo-gradient.
    self._lr = learning_rate
    # Exponential decay rates for the first and second moment estimates.
    self._beta1 = betas[0]
    self._beta2 = betas[1]
    # Adaptivity parameter (tau in the paper).
    self._adaptivity = t
    # Server model parameters, populated via set_params().
    self._params: dict = {}

set_params(params)

Store the given server model parameters in `self._params`.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `params` | `_type_` | Parameters of the server model. | required |
Source code in iflearner/business/homo/strategy/opt/fedopt.py
def set_params(self, params):
    """Store the given parameters in self._params.

    Args:
        params (_type_): parameters of the server model
    """
    self._params = params

step(pseudo_gradient)

Perform one optimization step on the server model parameters using the given pseudo-gradient.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `pseudo_gradient` | `Dict[str, npt.NDArray[np.float32]]` | The pseudo-gradient of the server model. | required |

Returns:

| Type | Description |
| ---- | ----------- |
| `Dict[str, npt.NDArray[np.float32]]` | Parameters of the server model after the step. |

Source code in iflearner/business/homo/strategy/opt/fedopt.py
def step(
    self,
    pseudo_gradient: Dict[str, npt.NDArray[np.float32]],
) -> Dict[str, npt.NDArray[np.float32]]:
    """Perform one optimization step on the server model parameters
    using the given pseudo-gradient.

    Args:
        pseudo_gradient (Dict[str, npt.NDArray[np.float32]]): the pseudo-gradient of the server model

    Returns:
        Dict[str, npt.NDArray[np.float32]]: parameters of the server model after the step
    """
    pass
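
As rendered here, the method body is a stub. For orientation only, below is a minimal sketch of what a FedAdam-style `step` could look like, following the server update rule from the referenced paper: m_t = β1·m_{t-1} + (1−β1)·Δ_t, v_t = β2·v_{t-1} + (1−β2)·Δ_t², x_t = x_{t-1} + lr·m_t / (√v_t + τ). The subclass name and the lazy moment initialization are illustrative assumptions, not the library's actual implementation:

from typing import Dict

import numpy as np
import numpy.typing as npt

from iflearner.business.homo.strategy.opt.fedopt import FedOpt

class FedAdamSketch(FedOpt):
    """Hypothetical FedAdam-style step; illustrative sketch only."""

    def step(
        self,
        pseudo_gradient: Dict[str, npt.NDArray[np.float32]],
    ) -> Dict[str, npt.NDArray[np.float32]]:
        if not hasattr(self, "_m"):
            # Lazily initialize first/second moment estimates to zero.
            self._m = {k: np.zeros_like(v) for k, v in pseudo_gradient.items()}
            self._v = {k: np.zeros_like(v) for k, v in pseudo_gradient.items()}
        for name, g in pseudo_gradient.items():
            # m_t = beta1 * m_{t-1} + (1 - beta1) * delta_t
            self._m[name] = self._beta1 * self._m[name] + (1 - self._beta1) * g
            # v_t = beta2 * v_{t-1} + (1 - beta2) * delta_t**2
            self._v[name] = self._beta2 * self._v[name] + (1 - self._beta2) * np.square(g)
            # x_t = x_{t-1} + lr * m_t / (sqrt(v_t) + tau)
            self._params[name] = self._params[name] + self._lr * self._m[name] / (
                np.sqrt(self._v[name]) + self._adaptivity
            )
        return self._params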