qfedavg_server

qFedavgServer(num_clients, total_epoch, q=1, learning_rate=0.1)

Bases: strategy_server.StrategyServer

Implements the qFedAvg strategy on the server side.

Attributes:

    num_clients (int): the number of clients.
    total_epoch (int): the number of training epochs per client.
    q (float): the q factor. Defaults to 1.
    learning_rate (float): the learning rate. Defaults to 0.1.
    _fs (dict): the loss values of each client.

Source code in iflearner/business/homo/strategy/qfedavg_server.py
def __init__(
    self,
    num_clients: int,
    total_epoch: int,
    q: float = 1,
    learning_rate: float = 0.1,
) -> None:
    super().__init__(num_clients, total_epoch)
    logger.info(f"num_clients: {self._num_clients}, strategy: qFedavg")

    self._q = q
    self._lr = learning_rate
    self._params: dict = {}
    self._fs: dict = {}
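
A minimal instantiation sketch, assuming the base StrategyServer needs no setup beyond the arguments shown above:

from iflearner.business.homo.strategy.qfedavg_server import qFedavgServer

# Two clients, ten local epochs each; q and learning_rate keep their defaults.
server = qFedavgServer(num_clients=2, total_epoch=10)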

norm_grad(grad)

Compute the squared L2 norm of the gradient (the sum of its squared values).

Parameters:

    grad (Dict[str, Dict]): the gradient, keyed by parameter name. Required.

Returns:

    _type_: the sum of squared gradient values (the squared L2 norm).

Source code in iflearner/business/homo/strategy/qfedavg_server.py
def norm_grad(self, grad: Dict[str, Dict]):
    """Compute the squared L2 norm of the gradient.

    Args:
        grad (Dict[str, Dict]): the gradient, keyed by parameter name

    Returns:
        _type_: the sum of squared gradient values
    """
    sum_grad = 0
    for v in grad.values():
        sum_grad += np.sum(np.square(v))  # type: ignore
    return sum_grad
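
A small usage sketch of norm_grad, assuming the server can be constructed standalone and that the gradient values are NumPy arrays (the body applies np.square to each value):

import numpy as np
from iflearner.business.homo.strategy.qfedavg_server import qFedavgServer

server = qFedavgServer(num_clients=1, total_epoch=1)
grad = {"w": np.array([3.0, 4.0]), "b": np.array([1.0])}
print(server.norm_grad(grad))  # 3**2 + 4**2 + 1**2 = 26.0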

step(deltas, hs)

Apply an optimization step using the aggregated deltas.

Parameters:

    deltas (Dict[str, Dict]): the deltas of the model parameters, keyed by client. Required.
    hs (Dict[str, float]): the per-client denominators. Required.

Returns:

    _type_: the new parameters after applying the deltas.

Source code in iflearner/business/homo/strategy/qfedavg_server.py
def step(self, deltas: Dict[str, Dict], hs: Dict[str, float]):
    """Apply an optimization step using the aggregated deltas.

    Args:
        deltas (Dict[str, Dict]): the deltas of the model parameters, keyed by client
        hs (Dict[str, float]): the per-client denominators

    Returns:
        _type_: the new parameters after applying the deltas
    """
    denominator = sum(hs.values())
    updates: dict = {}
    # Accumulate every client's delta, scaled by the shared denominator.
    for client_delta in deltas.values():
        for param_name, param in client_delta.items():
            updates[param_name] = updates.get(param_name, 0) + param / denominator
    new_param = {}
    # Subtract the aggregated update from each flattened parameter, then restore its shape.
    for param_name, param in self._params.items():
        new_param[param_name] = param.reshape((-1)) - updates[param_name]
        self._params[param_name] = new_param[param_name].reshape(param.shape)
    return new_param
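
A worked sketch of step under the same standalone-construction assumption: each new parameter equals the old parameter minus the sum of the client deltas divided by the sum of hs. Here _params is filled in by hand for illustration; in normal operation the server populates it from the clients' model parameters.

import numpy as np
from iflearner.business.homo.strategy.qfedavg_server import qFedavgServer

server = qFedavgServer(num_clients=2, total_epoch=10)
server._params = {"w": np.ones((2, 2))}  # set manually here for the sketch

deltas = {
    "client_1": {"w": np.full(4, 0.2)},  # flattened deltas, matching reshape((-1)) above
    "client_2": {"w": np.full(4, 0.4)},
}
hs = {"client_1": 1.5, "client_2": 2.5}

new_param = server.step(deltas, hs)
print(new_param["w"])  # each entry: 1.0 - (0.2 + 0.4) / (1.5 + 2.5) = 0.85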