luz.modules module¶
Custom PyTorch modules.
- class AdditiveAttention(d, d_attn, activation=None)¶
Bases:
torch.nn.modules.module.Module
Additive attention, from https://arxiv.org/abs/1409.0473.
- Parameters
d (int) – Feature length.
d_attn (int) – Attention vector length.
activation (Optional[Callable[[Tensor], Tensor]]) – Activation function, by default None.
- forward(s, h, mask=None)¶
Compute forward pass.
- Parameters
s (Tensor) – Shape: \((N,d)\)
h (Tensor) – Shape: \((N,d)\)
mask (Optional[Tensor]) – Mask tensor, by default None.
- Returns
Output tensor. Shape: \((1,N)\)
- Return type
torch.Tensor
- training: bool¶
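A minimal usage sketch for AdditiveAttention, assuming the class is imported from luz.modules as documented on this page (the tensor sizes are arbitrary; shapes follow the forward signature above):

```python
import torch
from luz.modules import AdditiveAttention

N, d, d_attn = 5, 16, 8

# Additive attention with an optional tanh activation.
attn = AdditiveAttention(d, d_attn, activation=torch.tanh)

s = torch.randn(N, d)  # shape (N, d)
h = torch.randn(N, d)  # shape (N, d)

weights = attn(s, h)   # shape (1, N), per the documented output shape
```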
- class AdditiveNodeAttention(d, d_attn, activation=None)¶
Bases:
torch.nn.modules.module.Module
Additive node attention on graphs, from https://arxiv.org/abs/1710.10903.
- Parameters
d (int) – Node feature length.
d_attn (int) – Attention vector length.
activation (Optional[Callable[[Tensor], Tensor]]) – Activation function, by default None.
- forward(nodes, edge_index)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
- Returns
Output tensor. Shape: \((N_{edges},N_{nodes})\)
- Return type
torch.Tensor
- training: bool¶
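A minimal usage sketch for AdditiveNodeAttention, assuming the import path luz.modules and a small hand-built edge index (node and attention sizes are arbitrary):

```python
import torch
from luz.modules import AdditiveNodeAttention

num_nodes, d, d_attn = 4, 16, 8

attn = AdditiveNodeAttention(d, d_attn)

nodes = torch.randn(num_nodes, d)                  # (N_nodes, d)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # (2, N_edges)

scores = attn(nodes, edge_index)                   # (N_edges, N_nodes)
```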
- class ApplyFunction(f)¶
Bases:
torch.nn.modules.module.Module
Apply a function to the input tensor.
- forward(x)¶
Compute forward pass.
- Parameters
x (Tensor) – Input tensor.
- Returns
Output tensor.
- Return type
torch.Tensor
- training: bool¶
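A minimal usage sketch for ApplyFunction, assuming f can be any callable that maps a tensor to a tensor (here torch.relu):

```python
import torch
from luz.modules import ApplyFunction

# Wrap an elementwise function so it can be used like any other module,
# e.g. inside torch.nn.Sequential.
relu = ApplyFunction(torch.relu)

x = torch.randn(3, 4)
y = relu(x)  # same result as torch.relu(x)
```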
- class AverageGraphPool(num_clusters)¶
Bases:
torch.nn.modules.module.Module
Pool graph by average node clustering.
- Parameters
num_clusters (int) – Number of node clusters.
- forward(nodes, edges, edge_index, batch, assignment)¶
Pool graph by average node clustering.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edges (Tensor) – Edge features. Shape: \((N_{edges},d_e)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
batch (Tensor) – Nodewise batch tensor. Shape: \((N_{nodes},)\)
assignment (Tensor) – Soft cluster assignment tensor. Shape: \((N_{nodes},N_{clusters})\)
- Return type
tuple[Tensor, Tensor, Tensor]
- Returns
torch.Tensor – Pooled node features. Shape: \((N_{nodes}',d_v)\)
torch.Tensor – Pooled edge features. Shape: \((N_{edges}',d_e)\)
torch.Tensor – Pooled edge index tensor. Shape: \((2,N_{edges}')\)
- training: bool¶
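A minimal usage sketch for AverageGraphPool, assuming the import path luz.modules; the soft cluster assignment here is just a random softmax used to match the documented \((N_{nodes},N_{clusters})\) shape:

```python
import torch
from luz.modules import AverageGraphPool

num_nodes, num_edges, num_clusters = 6, 8, 2
d_v, d_e = 16, 4

pool = AverageGraphPool(num_clusters)

nodes = torch.randn(num_nodes, d_v)                       # (N_nodes, d_v)
edges = torch.randn(num_edges, d_e)                       # (N_edges, d_e)
edge_index = torch.randint(0, num_nodes, (2, num_edges))  # (2, N_edges)
batch = torch.zeros(num_nodes, dtype=torch.long)          # single graph
assignment = torch.softmax(torch.randn(num_nodes, num_clusters), dim=1)

pooled_nodes, pooled_edges, pooled_edge_index = pool(
    nodes, edges, edge_index, batch, assignment
)
```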
- class Concatenate(dim=0)¶
Bases:
torch.nn.modules.module.Module
Concatenate tensors along a given dimension.
- Parameters
dim (Optional[int]) – Concatenation dimension, by default 0.
- forward(*tensors)¶
Compute forward pass.
- Parameters
*tensors – Input tensors. Shape: \((N,*)\)
- Returns
Output tensor.
- Return type
torch.Tensor
- training: bool¶
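A minimal usage sketch for Concatenate, assuming the import path luz.modules:

```python
import torch
from luz.modules import Concatenate

cat = Concatenate(dim=1)  # concatenate along the feature dimension

a = torch.randn(3, 4)
b = torch.randn(3, 2)
c = cat(a, b)  # shape (3, 6), equivalent to torch.cat([a, b], dim=1)
```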
- class Dense(*features, bias=True, activation=None)¶
Bases:
torch.nn.modules.module.Module
Dense feed-forward neural network.
- Parameters
*features – Number of features at each layer.
bias (Optional[bool]) – If False, each layer will not learn an additive bias; by default True.
activation (Optional[Callable[[Tensor], Tensor]]) – Activation function.
- forward(x)¶
Compute forward pass.
- Parameters
x (Tensor) – Input tensor. Shape: \((N, *, H_{in})\)
- Returns
Output tensor. Shape: \((N, *, H_{out})\)
- Return type
torch.Tensor
- training: bool¶
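A minimal usage sketch for Dense, assuming the import path luz.modules; the layer widths 16, 32, 8 are arbitrary and are passed as the documented *features argument:

```python
import torch
from luz.modules import Dense

# Layer widths 16 -> 32 -> 8 with a ReLU activation function.
net = Dense(16, 32, 8, activation=torch.relu)

x = torch.randn(10, 16)  # (N, H_in)
y = net(x)               # (N, H_out) = (10, 8)
```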
- class DenseRNN(input_size, hidden_size, output_size)¶
Bases:
torch.nn.modules.module.Module
Dense recurrent neural network.
- forward(x)¶
Compute forward pass.
- Parameters
x (Tensor) – Input tensor.
- Returns
Output tensor.
- Return type
torch.Tensor
- training: bool¶
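A minimal usage sketch for DenseRNN, assuming the import path luz.modules; the expected input layout is not specified above, so the 2-D input shape below is only an assumption:

```python
import torch
from luz.modules import DenseRNN

rnn = DenseRNN(input_size=8, hidden_size=16, output_size=4)

# The documented forward only states "Input tensor"; a 2-D (N, input_size)
# input is assumed here and may need to be adjusted.
x = torch.randn(10, 8)
y = rnn(x)
```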
- class DotProductAttention¶
Bases:
torch.nn.modules.module.Module
Scaled dot product attention.
- forward(query, key, mask=None)¶
Compute forward pass.
- Parameters
query (Tensor) – Query vectors. Shape: \((N_{queries},d_q)\)
key (Tensor) – Key vectors. Shape: \((N_{keys},d_q)\)
mask (Optional[Tensor]) – Mask tensor to ignore query-key pairs, by default None. Shape: \((N_{queries},N_{keys})\)
- Returns
Scaled dot product attention between each query and key vector. Shape: \((N_{queries},N_{keys})\)
- Return type
torch.Tensor
- training: bool¶
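A minimal usage sketch for DotProductAttention, assuming the import path luz.modules; the exact mask convention (which entries are ignored) is not specified above, so the all-ones mask below is only illustrative:

```python
import torch
from luz.modules import DotProductAttention

attn = DotProductAttention()

num_queries, num_keys, d_q = 3, 5, 8
query = torch.randn(num_queries, d_q)     # (N_queries, d_q)
key = torch.randn(num_keys, d_q)          # (N_keys, d_q)
mask = torch.ones(num_queries, num_keys)  # optional (N_queries, N_keys) mask

scores = attn(query, key, mask=mask)      # (N_queries, N_keys)
```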
- class EdgeAggregateGlobal(d_v, d_e, d_u, d_attn, num_heads=1)¶
Bases:
torch.nn.modules.module.Module
Aggregates graph edges using multihead attention.
- Parameters
d_v (int) – Node feature length.
d_e (int) – Edge feature length.
d_u (int) – Global feature length.
d_attn (int) – Attention vector length.
num_heads (Optional[int]) – Number of attention heads.
- forward(nodes, edges, edge_index, u, batch)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edges (Tensor) – Edge features. Shape: \((N_{edges},d_e)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
u (Tensor) – Global features. Shape: \((N_{batch},d_u)\)
batch (Tensor) – Nodewise batch tensor. Shape: \((N_{nodes},)\)
- Returns
Output tensor. Shape: \((N_{batch},d_e)\)
- Return type
torch.Tensor
- training: bool¶
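A minimal usage sketch for EdgeAggregateGlobal (the same call pattern applies to the other aggregation modules below), assuming the import path luz.modules and a toy batch of two graphs:

```python
import torch
from luz.modules import EdgeAggregateGlobal

num_nodes, num_edges, num_graphs = 6, 10, 2
d_v, d_e, d_u, d_attn = 16, 8, 4, 12

agg = EdgeAggregateGlobal(d_v, d_e, d_u, d_attn, num_heads=2)

nodes = torch.randn(num_nodes, d_v)                       # (N_nodes, d_v)
edges = torch.randn(num_edges, d_e)                       # (N_edges, d_e)
edge_index = torch.randint(0, num_nodes, (2, num_edges))  # (2, N_edges)
u = torch.randn(num_graphs, d_u)                          # (N_batch, d_u)
batch = torch.tensor([0, 0, 0, 1, 1, 1])                  # nodewise graph ids

out = agg(nodes, edges, edge_index, u, batch)             # (N_batch, d_e)
```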
- class EdgeAggregateGlobalHead(d_v, d_e, d_u, d_attn)¶
Bases:
torch.nn.modules.module.Module
Aggregates graph edges using attention.
- Parameters
d_v (int) – Node feature length.
d_e (int) – Edge feature length.
d_u (int) – Global feature length.
d_attn (int) – Attention vector length.
- forward(nodes, edges, edge_index, u, batch)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edges (Tensor) – Edge features. Shape: \((N_{edges},d_e)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
u (Tensor) – Global features. Shape: \((N_{batch},d_u)\)
batch (Tensor) – Nodewise batch tensor. Shape: \((N_{nodes},)\)
- Returns
Output tensor. Shape: \((N_{batch},d_e)\)
- Return type
torch.Tensor
- training: bool¶
- class EdgeAggregateLocal(d_v, d_e, d_u, d_attn, num_heads=1)¶
Bases:
torch.nn.modules.module.Module
Aggregates graph edges using multihead attention.
- Parameters
d_v (int) – Node feature length.
d_e (int) – Edge feature length.
d_u (int) – Global feature length.
d_attn (int) – Attention vector length.
num_heads (Optional[int]) – Number of attention heads.
- forward(nodes, edges, edge_index, u, batch)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edges (Tensor) – Edge features. Shape: \((N_{edges},d_e)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
u (Tensor) – Global features. Shape: \((N_{batch},d_u)\)
batch (Tensor) – Nodewise batch tensor. Shape: \((N_{nodes},)\)
- Returns
Output tensor. Shape: \((N_{nodes},d_e)\)
- Return type
torch.Tensor
- training: bool¶
- class EdgeAggregateLocalHead(d_v, d_e, d_u, d_attn, nodewise=True)¶
Bases:
torch.nn.modules.module.Module
Aggregates graph edges using attention.
- Parameters
d_v (int) – Node feature length.
d_e (int) – Edge feature length.
d_u (int) – Global feature length.
d_attn (int) – Attention vector length.
- forward(nodes, edges, edge_index, u, batch)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edges (Tensor) – Edge features. Shape: \((N_{edges},d_e)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
u (Tensor) – Global features. Shape: \((N_{batch},d_u)\)
batch (Tensor) – Nodewise batch tensor. Shape: \((N_{nodes},)\)
- Returns
Output tensor. Shape: \((N_{nodes},d_e)\)
- Return type
torch.Tensor
- training: bool¶
- class ElmanRNN(input_size, hidden_size, num_layers=None, nonlinearity=None, bias=None, batch_first=None, dropout=None, bidirectional=None, h0=None)¶
Bases:
torch.nn.modules.module.Module
Elman recurrent neural network.
- forward(x)¶
Compute forward pass.
- training: bool¶
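A minimal usage sketch for ElmanRNN, assuming the import path luz.modules; the (batch, sequence, feature) input layout with batch_first=True mirrors the constructor arguments above but is an assumption, since the forward shapes are not documented here:

```python
import torch
from luz.modules import ElmanRNN

rnn = ElmanRNN(input_size=8, hidden_size=16, num_layers=2, batch_first=True)

# Assumed layout: (batch, sequence length, input_size) with batch_first=True.
x = torch.randn(4, 10, 8)
y = rnn(x)
```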
- class GraphConv(d_v, activation)¶
Bases:
torch.nn.modules.module.Module
Graph convolutional network from https://arxiv.org/abs/1609.02907.
- Parameters
d_v (int) – Node feature length.
activation (Callable[[Tensor], Tensor]) – Activation function.
- forward(nodes, edge_index)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edge_index (Tensor) – Edge indices. Shape: \((2,N_{edges})\)
- Returns
Output tensor. Shape: \((N_{nodes},d_v)\)
- Return type
torch.Tensor
- training: bool¶
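A minimal usage sketch for GraphConv, assuming the import path luz.modules and a small hand-built edge index:

```python
import torch
from luz.modules import GraphConv

num_nodes, d_v = 5, 16

conv = GraphConv(d_v, activation=torch.relu)

nodes = torch.randn(num_nodes, d_v)                       # (N_nodes, d_v)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])   # (2, N_edges)

out = conv(nodes, edge_index)                             # (N_nodes, d_v)
```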
- class GraphConvAttention(d_v, activation=None)¶
Bases:
torch.nn.modules.module.Module
Compute node attention weights using a graph convolutional network.
- Parameters
d_v (int) – Node feature length.
activation (Optional[Callable[[Tensor], Tensor]]) – Activation function.
- forward(nodes, edge_index, batch)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edge_index (Tensor) – Edge indices. Shape: \((2,N_{edges})\)
batch (Tensor) – Batch indices. Shape: \((N_{nodes},)\)
- Returns
Attention weights. Shape: \((N_{batch},N_{nodes})\)
- Return type
torch.Tensor
- training: bool¶
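A minimal usage sketch for GraphConvAttention, assuming the import path luz.modules and a single graph in the batch:

```python
import torch
from luz.modules import GraphConvAttention

num_nodes, d_v = 5, 16

attn = GraphConvAttention(d_v, activation=torch.relu)

nodes = torch.randn(num_nodes, d_v)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
batch = torch.zeros(num_nodes, dtype=torch.long)  # single graph

weights = attn(nodes, edge_index, batch)          # (N_batch, N_nodes)
```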
- class GraphNetwork(edge_model=None, node_model=None, global_model=None, num_layers=1)¶
Bases:
torch.nn.modules.module.Module
Graph Network from https://arxiv.org/abs/1806.01261.
- Parameters
edge_model (Optional[Module]) – Edge update network, by default None.
node_model (Optional[Module]) – Node update network, by default None.
global_model (Optional[Module]) – Global update network, by default None.
num_layers (Optional[int]) – Number of passes, by default 1.
- forward(nodes, edge_index, edges=None, u=None, batch=None)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
edges (Optional[Tensor]) – Edge features, by default None. Shape: \((N_{edges},d_e)\)
u (Optional[Tensor]) – Global features, by default None. Shape: \((N_{batch},d_u)\)
batch (Optional[Tensor]) – Nodewise batch tensor, by default None. Shape: \((N_{nodes},)\)
- Return type
tuple[Tensor, Tensor, Tensor, Tensor, Tensor]
- Returns
torch.Tensor – Output node feature tensor. Shape: \((N_{nodes},d_v)\)
torch.Tensor – Output edge index tensor. Shape: \((2,N_{edges})\)
torch.Tensor – Output edge feature tensor. Shape: \((N_{edges},d_e)\)
torch.Tensor – Output global feature tensor. Shape: \((N_{batch},d_u)\)
torch.Tensor – Output batch tensor. Shape: \((N_{nodes},)\)
- training: bool¶
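A minimal usage sketch for GraphNetwork, assuming the import path luz.modules; the interfaces expected of edge_model, node_model, and global_model are not documented above, so all three are left at their defaults here and only the forward call signature is illustrated:

```python
import torch
from luz.modules import GraphNetwork

num_nodes, num_edges, num_graphs = 6, 10, 2
d_v, d_e, d_u = 16, 8, 4

# All update models left as None (their defaults); in practice they would be
# modules compatible with the node/edge/global update steps of the network.
gn = GraphNetwork(num_layers=2)

nodes = torch.randn(num_nodes, d_v)
edge_index = torch.randint(0, num_nodes, (2, num_edges))
edges = torch.randn(num_edges, d_e)
u = torch.randn(num_graphs, d_u)
batch = torch.tensor([0, 0, 0, 1, 1, 1])

out = gn(nodes, edge_index, edges=edges, u=u, batch=batch)
```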
- class MaskedSoftmax(dim=None)¶
Bases:
torch.nn.modules.module.Module
Compute softmax of a tensor using a mask.
- forward(x, mask=None)¶
Compute forward pass.
- Parameters
x (Tensor) – Argument of softmax.
mask (Optional[Tensor]) – Mask tensor with the same shape as x, by default None.
- Returns
Masked softmax of x.
- Return type
torch.Tensor
- training: bool¶
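A minimal usage sketch for MaskedSoftmax, assuming the import path luz.modules; the exact mask convention is not specified above, so the interpretation of 0 entries as "masked out" in the comment below is an assumption:

```python
import torch
from luz.modules import MaskedSoftmax

softmax = MaskedSoftmax(dim=-1)

x = torch.randn(2, 4)
mask = torch.tensor([[1, 1, 0, 0],
                     [1, 1, 1, 0]])  # same shape as x; 0 assumed to mean "masked"

y = softmax(x, mask=mask)
```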
- class NodeAggregate(d_v, d_u, num_heads=1)¶
Bases:
torch.nn.modules.module.Module
Aggregates graph nodes using multihead attention.
- Parameters
d_v (int) – Node feature length.
d_u (int) – Global feature length.
num_heads (Optional[int]) – Number of attention heads.
- forward(nodes, edges, edge_index, u, batch)¶
Compute forward pass.
- Parameters
nodes (Tensor) – Node features. Shape: \((N_{nodes},d_v)\)
edges (Tensor) – Edge features. Shape: \((N_{edges},d_e)\)
edge_index (Tensor) – Edge index tensor. Shape: \((2,N_{edges})\)
u (Tensor) – Global features. Shape: \((N_{batch},d_u)\)
batch (Tensor) – Nodewise batch tensor. Shape: \((N_{nodes},)\)
- Returns
Output tensor. Shape: \((N_{batch},d_v)\)
- Return type
torch.Tensor
- training: bool¶