
Layers & Pipelines API Reference

HLQuantum borrows the layer / sequential pattern from ML frameworks to let you compose complex quantum circuits from reusable building blocks.

Key Concepts

Class                        Purpose
Layer                        Abstract base — every layer implements build() → Circuit.
Sequential                   Container that chains layers and composes their circuits.
CircuitLayer                 Wraps an existing Circuit as a layer.
QFTLayer                     Layer wrapping the QFT algorithm.
GroverLayer                  Layer wrapping Grover's search.
RealAmplitudes               RY + CX variational ansatz (full / linear entanglement).
HardwareEfficientAnsatz      RX + RY + CX hardware-efficient ansatz.
QuantumMultiHeadAttention    Quantum multi-head attention mechanism.
QuantumTransformerBlock      Attention + variational feed-forward block.

Quick Example

from hlquantum.layers import (
    Sequential, CircuitLayer, QFTLayer, GroverLayer, RealAmplitudes,
)
from hlquantum.circuit import Circuit

# Wrap a hand-crafted circuit
init = CircuitLayer(Circuit(4).h(0).cx(0, 1))

# Stack layers into a pipeline
model = Sequential([
    init,
    QFTLayer(num_qubits=4),
    GroverLayer(num_qubits=4, target_states=["1010"]),
    RealAmplitudes(num_qubits=4, reps=2),
])

# Compile to a single circuit
circuit = model.build()
print(circuit)

Layers can also be composed with the | (pipe) operator:

from hlquantum.layers.core import CircuitLayer
from hlquantum.circuit import Circuit

a = CircuitLayer(Circuit(2).h(0))
b = CircuitLayer(Circuit(2).cx(0, 1))
combined = a | b          # returns a Sequential
circuit = combined.build()
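
Because a | b already returns a Sequential, the pipe can be chained to append further layers before building. A minimal sketch using only classes documented on this page:

from hlquantum.layers import QFTLayer, RealAmplitudes
from hlquantum.layers.core import CircuitLayer
from hlquantum.circuit import Circuit

prep = CircuitLayer(Circuit(3).h(0).cx(0, 1))

# Each pipe produces a Sequential, so the chain below is equivalent to
# Sequential([prep, QFTLayer(3), RealAmplitudes(3, reps=1)]).
pipeline = prep | QFTLayer(num_qubits=3) | RealAmplitudes(num_qubits=3, reps=1)
circuit = pipeline.build()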

Base & Container

hlquantum.layers.base

Base classes for quantum layers and modular circuit building.

Layer

Bases: ABC

Abstract base class for a quantum layer.

Source code in hlquantum/layers/base.py
class Layer(ABC):
    """Abstract base class for a quantum layer."""

    @abstractmethod
    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        """Build the module and return a QuantumCircuit."""
        pass

    def __or__(self, other: Layer) -> Sequential:
        """Pipe layers together."""
        if isinstance(other, Layer):
            return Sequential([self, other])
        return NotImplemented

__or__(other)

Pipe layers together.

Source code in hlquantum/layers/base.py
def __or__(self, other: Layer) -> Sequential:
    """Pipe layers together."""
    if isinstance(other, Layer):
        return Sequential([self, other])
    return NotImplemented

build(input_qubits=None) abstractmethod

Build the module and return a QuantumCircuit.

Source code in hlquantum/layers/base.py
@abstractmethod
def build(self, input_qubits: Optional[int] = None) -> Circuit:
    """Build the module and return a QuantumCircuit."""
    pass
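
Any class that subclasses Layer and implements build() can participate in a pipeline. A minimal sketch of a custom layer (HadamardWall is an illustrative name, not part of the library):

from typing import Optional

from hlquantum.circuit import Circuit
from hlquantum.layers.base import Layer


class HadamardWall(Layer):
    """Hypothetical example: put every qubit into uniform superposition."""

    def __init__(self, num_qubits: int) -> None:
        self.num_qubits = num_qubits

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        # Honour a width override from an enclosing pipeline, as the built-in layers do.
        n = input_qubits if input_qubits is not None else self.num_qubits
        qc = Circuit(n)
        for i in range(n):
            qc.h(i)
        return qc

Because it derives from Layer, it inherits __or__ and can be piped like any built-in layer, e.g. HadamardWall(3) | QFTLayer(num_qubits=3).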

Sequential

Bases: Layer

A container for a sequence of quantum layers.

Source code in hlquantum/layers/base.py
class Sequential(Layer):
    """A container for a sequence of quantum layers."""

    def __init__(self, layers: List[Layer]) -> None:
        self.layers = layers

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        if not self.layers:
            raise ValueError("Sequential model must have at least one layer.")

        # Determine number of qubits if not provided
        if input_qubits is None:
            # We'll just build them one by one and let them determine their size
            # or rely on the first layer's requirement.
            # For simplicity, we assume the user knows what they are doing
            # or we take the max num_qubits from all built circuits.
            pass

        full_circuit: Optional[Circuit] = None
        for layer in self.layers:
            circuit = layer.build(input_qubits)
            if full_circuit is None:
                full_circuit = circuit
            else:
                full_circuit = full_circuit | circuit

            # Update input_qubits for next layer to match current circuit size
            input_qubits = full_circuit.num_qubits

        return full_circuit

    def __or__(self, other: Layer) -> Sequential:
        if isinstance(other, Sequential):
            return Sequential(self.layers + other.layers)
        if isinstance(other, Layer):
            return Sequential(self.layers + [other])
        return NotImplemented
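
Note how build() threads the running width through the pipeline: after each composition step, input_qubits is set to full_circuit.num_qubits, so later layers that accept a width override are built to match the widest circuit produced so far. A small sketch of that behaviour:

from hlquantum.layers import Sequential, QFTLayer
from hlquantum.layers.core import CircuitLayer
from hlquantum.circuit import Circuit

model = Sequential([
    CircuitLayer(Circuit(5).h(0)),   # fixes the running width at 5 qubits
    QFTLayer(num_qubits=3),          # receives input_qubits=5, builds a 5-qubit QFT
])
print(model.build().num_qubits)      # expected: 5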

Core Layers

hlquantum.layers.core

Core layer implementations.

CircuitLayer

Bases: Layer

Wraps an existing QuantumCircuit as a layer.

Source code in hlquantum/layers/core.py
class CircuitLayer(Layer):
    """Wraps an existing QuantumCircuit as a layer."""

    def __init__(self, circuit: Circuit) -> None:
        self.circuit = circuit

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        # If input_qubits is more than circuit.num_qubits, we might want to expand?
        # For now, just return the circuit.
        return self.circuit
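
As the source shows, CircuitLayer ignores input_qubits and returns the wrapped circuit unchanged, so the wrapped circuit's own width always wins:

from hlquantum.layers.core import CircuitLayer
from hlquantum.circuit import Circuit

layer = CircuitLayer(Circuit(2).h(0).cx(0, 1))
print(layer.build().num_qubits)                # 2
print(layer.build(input_qubits=5).num_qubits)  # still 2: input_qubits is ignored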

Functional Layers

hlquantum.layers.functional

Functional layers wrapping standard algorithms.

GroverLayer

Bases: Layer

A layer implementing Grover's search.

Source code in hlquantum/layers/functional.py
class GroverLayer(Layer):
    """A layer implementing Grover's search."""

    def __init__(self, num_qubits: int, target_states: List[str], iterations: Optional[int] = None) -> None:
        self.num_qubits = num_qubits
        self.target_states = target_states
        self.iterations = iterations

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        n = input_qubits if input_qubits is not None else self.num_qubits
        return quantum_search(n, self.target_states, self.iterations)
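
GroverLayer is a thin wrapper around quantum_search; target states are given as bitstrings, and iterations, when omitted, is passed through as None. A usage sketch:

from hlquantum.layers import GroverLayer

# Search a 3-qubit register for the marked state |101>
grover = GroverLayer(num_qubits=3, target_states=["101"])
circuit = grover.build()   # delegates to quantum_search(3, ["101"], None)
print(circuit)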

QFTLayer

Bases: Layer

A layer implementing Quantum Fourier Transform.

Source code in hlquantum/layers/functional.py
class QFTLayer(Layer):
    """A layer implementing Quantum Fourier Transform."""

    def __init__(self, num_qubits: int) -> None:
        self.num_qubits = num_qubits

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        n = input_qubits if input_qubits is not None else self.num_qubits
        return frequency_transform(n)
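
Like the other functional layers, QFTLayer lets input_qubits (normally supplied by an enclosing Sequential) override the width it was constructed with:

from hlquantum.layers import QFTLayer

qft = QFTLayer(num_qubits=4)
circuit = qft.build()              # 4-qubit QFT via frequency_transform(4)
wider = qft.build(input_qubits=6)  # widened to 6 qubits when a pipeline asks for it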

Variational Templates

hlquantum.layers.templates

Predefined quantum templates and variational ansätze.

HardwareEfficientAnsatz

Bases: Layer

A general hardware-efficient ansatz with RX, RY, RZ and CX.

Source code in hlquantum/layers/templates.py
class HardwareEfficientAnsatz(Layer):
    """A general hardware-efficient ansatz with RX, RY, RZ and CX."""

    def __init__(self, num_qubits: int, reps: int = 1) -> None:
        self.num_qubits = num_qubits
        self.reps = reps

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        n = input_qubits if input_qubits is not None else self.num_qubits
        qc = Circuit(n)

        param_idx = 0
        for r in range(self.reps):
            for i in range(n):
                qc.rx(i, Parameter(f"theta_{param_idx}"))
                qc.ry(i, Parameter(f"theta_{param_idx + 1}"))
                param_idx += 2

            for i in range(n - 1):
                qc.cx(i, i + 1)

        return qc
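
As the loops show, each repetition applies an RX and an RY rotation on every qubit followed by a linear CX chain, so the ansatz introduces 2 * num_qubits * reps parameters named theta_0, theta_1, and so on:

from hlquantum.layers.templates import HardwareEfficientAnsatz

ansatz = HardwareEfficientAnsatz(num_qubits=3, reps=2)
qc = ansatz.build()
# 2 rotations x 3 qubits x 2 reps = 12 parameters: theta_0 .. theta_11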

RealAmplitudes

Bases: Layer

An ansatz consisting of single-qubit RY rotations and CX entanglers.

Source code in hlquantum/layers/templates.py
class RealAmplitudes(Layer):
    """An ansatz consisting of single-qubit RY rotations and CX entanglers."""

    def __init__(self, num_qubits: int, reps: int = 1, entanglement: str = "full") -> None:
        self.num_qubits = num_qubits
        self.reps = reps
        self.entanglement = entanglement

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        n = input_qubits if input_qubits is not None else self.num_qubits
        qc = Circuit(n)

        param_idx = 0

        # Initial rotation layer
        for i in range(n):
            qc.ry(i, Parameter(f"theta_{param_idx}"))
            param_idx += 1

        for r in range(self.reps):
            # Entanglement layer
            if self.entanglement == "full":
                for i in range(n):
                    for j in range(i + 1, n):
                        qc.cx(i, j)
            elif self.entanglement == "linear":
                for i in range(n - 1):
                    qc.cx(i, i + 1)

            # Rotation layer
            for i in range(n):
                qc.ry(i, Parameter(f"theta_{param_idx}"))
                param_idx += 1

        return qc
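
RealAmplitudes places one RY per qubit up front and one per qubit after each entanglement block, for num_qubits * (reps + 1) parameters in total; entanglement is either "full" (all qubit pairs) or "linear" (nearest neighbours). A usage sketch:

from hlquantum.layers.templates import RealAmplitudes

ansatz = RealAmplitudes(num_qubits=4, reps=2, entanglement="linear")
qc = ansatz.build()
# 4 qubits x (2 reps + 1) = 12 RY parameters: theta_0 .. theta_11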

Attention & Transformer Layers

hlquantum.layers.attention

Quantum Transformers and Attention Mechanisms.

QuantumMultiHeadAttention

Bases: Layer

A Quantum Multi-Head Attention layer.

This layer encodes classical data into quantum states and applies a parameterized circuit to compute attention-like features.

Source code in hlquantum/layers/attention.py
class QuantumMultiHeadAttention(Layer):
    """A Quantum Multi-Head Attention layer.

    This layer encodes classical data into quantum states and applies a 
    parameterized circuit to compute attention-like features.
    """

    def __init__(self, num_qubits: int, n_heads: int = 1):
        self.num_qubits = num_qubits
        self.n_heads = n_heads

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        n = input_qubits if input_qubits is not None else self.num_qubits
        qc = Circuit(n)

        # Parallel Attention Heads
        for head in range(self.n_heads):
            # Query/Key mapping (simplified as rotations)
            for i in range(n):
                qc.ry(i, Parameter(f"head_{head}_rot_{i}"))

            # Entangling layer (Attention mechanism)
            for i in range(n - 1):
                qc.cx(i, i + 1)

        return qc
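
Each head contributes one parameterised RY per qubit (named head_{h}_rot_{i}) followed by a linear CX chain, with all heads applied in sequence on the same register:

from hlquantum.layers.attention import QuantumMultiHeadAttention

attn = QuantumMultiHeadAttention(num_qubits=4, n_heads=2)
qc = attn.build()
# 2 heads x 4 qubits = 8 parameters: head_0_rot_0 .. head_1_rot_3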

QuantumTransformerBlock

Bases: Layer

A high-level block containing Quantum Attention and Feed-Forward layers.

Source code in hlquantum/layers/attention.py
class QuantumTransformerBlock(Layer):
    """A high-level block containing Quantum Attention and Feed-Forward layers."""

    def __init__(self, num_qubits: int):
        self.num_qubits = num_qubits

    def build(self, input_qubits: Optional[int] = None) -> Circuit:
        n = input_qubits if input_qubits is not None else self.num_qubits

        # Combine Attention and variational layers
        from hlquantum.layers.templates import RealAmplitudes

        qc = QuantumMultiHeadAttention(n).build()
        qc = qc | RealAmplitudes(n, reps=1).build()

        return qc
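
QuantumTransformerBlock composes the attention layer above with a RealAmplitudes feed-forward block on the same register, so it slots into a pipeline like any other layer. A closing sketch:

from hlquantum.layers import Sequential, QFTLayer
from hlquantum.layers.attention import QuantumTransformerBlock

model = Sequential([
    QFTLayer(num_qubits=4),
    QuantumTransformerBlock(num_qubits=4),
])
circuit = model.build()
print(circuit)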