Brain Tumor Diagnosis Using Quantum Convolutional Neural Networks
Muhammad Al-Zafar Khan, Nouhaila Innan, Abdullah Al Omar Galib, Mohamed Bennai
Quantum United Arab Emirates (QUAE), UAE
Quantum Physics and Magnetism Team, LPMC, Faculty of Sciences Ben M'sick, Hassan II University of Casablanca, Morocco
Independent Researcher
m.khan@quae.ae, nouhaila.innan-etu@etu.univh2c.ma, abdullahalomargalib@gmail.com, mohamed.bennai@univh2c.ma
Abstract—Integrating Quantum Convolutional Neural Net-
works (QCNNs) into medical diagnostics represents a trans-
formative advancement in the classification of brain tumors.
This research details a high-precision design and execution of
a QCNN model specifically tailored to identify and classify
brain cancer images. Our proposed QCNN architecture and
algorithm have achieved an exceptional classification accuracy
of 99.67%, demonstrating the model’s potential as a powerful
tool for clinical applications. The remarkable performance of our
model underscores its capability to facilitate rapid and reliable
brain tumor diagnoses, potentially streamlining the decision-
making process in treatment planning. These findings strongly
support the further investigation and application of quantum
computing and quantum machine learning methodologies in
medical imaging, suggesting a future where quantum-enhanced
diagnostics could significantly elevate the standard of patient care
and treatment outcomes.
Index Terms—Quantum Convolutional Neural Networks, Con-
volutional Neural Networks, Quantum Machine Learning, Quan-
tum Computing
I. INTRODUCTION
Brain tumors are neoplasms that represent abnormal cell
growth within the brain and its surrounding structures. Although relatively rare in comparison to other cancers, brain tumors, like other cancers, can be benign or malignant, and they are poorly diagnosed because neurological symptoms are not easily detected, and specific tests, such as biopsies and the analysis of brain scans, need to be conducted by human experts. The process of diagnosis usually involves three facets: consideration of molecular features, analysis of histological characteristics (microscopic analysis of cells and tissues from the brain), and anatomical location (lobar regions of the brain).
Holistically, these tumors are categorized as follows:
1) Meningiomas: Tumors originating from the meninges
surrounding the brain and the spinal cord.
2) Pituitary Adenomas: Tumors that originate in the pi-
tuitary gland at the base of the brain.
3) Gliomas: Tumors that originate from the glial cells in the central nervous system. Gliomas are the most common type of brain tumor to develop, and are therefore known as “primary brain tumors.”
4) Metastatic Lesions: Tumors that originate from cancer cells in other parts of the body and have spread to the brain via metastasis. They are therefore known as “secondary brain tumors.”
5) Medulloblastomas: Tumors that originate from the
cerebellum, and are most commonly found in children.
6) Acoustic Neuromas / Schwannomas: Tumors that originate from the Schwann cells insulating nerve fibers.
7) Pinealomas / Pineocytoma: Tumors that originate from
the pineal gland.
According to research published by the American Cancer Society [1], it was reported that in 2023, 24,810 adults (14,280 men and 10,530 women) in the United States were diagnosed with a form of cancerous tumor of the brain or spinal cord, and a 2020 survey found that globally 308,102 people were diagnosed. Although a small percentage of the overall populations of the United States and the world, respectively, these patients are entitled to receive the best care possible.
Usually, treatment paths are patient-specific, depending on
several factors, such as the patient’s current state of health, the
presence of other underlying diseases, and so on. As with other
cancers, treatment is very expensive and requires the patient to take significant steps toward a complete lifestyle overhaul.
This multifarious approach encompasses, amongst others, a
change of diet, incorporation of exercise, radiation therapy,
immunotherapy, chemotherapy, and molecular therapy.
With the current Artificial Intelligence (AI) revolution in
all fields that the world is experiencing, the medical field is
no exception. Specifically, within the context of the analysis
of medical images for diagnosis and prognosis, Convolutional
Neural Networks (CNNs) are a type of Neural Network (NN) architecture designed for computer vision and image processing assignments. Similar to how a perceptron is modeled after biological neurons, the CNN is modeled after the cortical preprocessing regions of the striate cortex (primary visual cortex – V1) and the prestriate cortex (secondary visual cortex – V2), located in the occipital lobe at the back of the brain. Analogous to
how computer vision tasks in the pre-Machine Learning (ML)
and pre-Deep Learning (DL) eras used to be concerned with
detecting edges, shapes, and textures, the neurons in this region
are sensitive to discerning these patterns. CNNs have proven
to be invaluable in the medical domain, and many healthcare
facilities are incorporating CNN-based classification systems
together with medical experts for early disease detection, and
recommended treatment plans [2]–[4].
Architecturally, the CNN is described as being composed
of two distinguishable layers:
1) Convolutional Layer: These contain adjustable filters/kernels/weights optimized for the task's superlative performance. The output from this layer is known as a feature map. Given a two-dimensional input image X, the kernel K is applied in order to obtain the feature map Y at each spatial location,

Y(i, j) = \sum_{m} \sum_{n} X(i + m, j + n) \cdot K(m, n),    (1)

where i and m are the x-coordinates of the image and kernel locations respectively, and j and n are the y-coordinates of the image and kernel locations respectively. Subsequently, a nonlinear activation function is applied; it serves the network by introducing sparsity, mitigating small gradients so that the vanishing-gradients problem is avoided during backpropagation, and speeding up the convergence of the loss function. Typically in a CNN, ReLU(x) = max(0, x), or a variant, is used. (A NumPy sketch of the convolution and max pooling operations follows this list.)
2) Pooling Layer/Max Pooling/Subsampling: These re-
duce the spatial resolution of the feature maps by a
process called downsampling, which reduces the size of
the feature map, typically by half its original size. There
are several types of pooling operations; these include:
a) Max Pooling: Chooses the maximum value from
each subregion of the feature map.
Y(i, j) = \max_{m,n} X(i \times s + m, j \times s + n),    (2)

where s is the stride, or number of pixel shifts over the input matrix.
b) Average Pooling: Chooses the average value from
each subregion of the feature map.
Y(i, j) = \frac{1}{k} \sum_{m} \sum_{n} X(i \times s + m, j \times s + n),    (3)

where k is the number of elements in each subregion.
c) ℓ2 Norm Pooling: Chooses the ℓ2 norm of each subregion in the feature map,

Y(i, j) = \frac{1}{k} \sqrt{ \sum_{m} \sum_{n} X^{2}(i \times s + m, j \times s + n) }.    (4)
d) Global Pooling: Chooses the maximum or average
value across all the feature maps in a layer. This
produces a single scalar value for each feature map.
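To make Eqs. (1) and (2) concrete, the following minimal NumPy sketch implements a valid convolution followed by ReLU and non-overlapping max pooling. It is illustrative only: the function names conv2d and max_pool2d and the toy kernel are our own choices, not part of the paper's implementation.

```python
import numpy as np

def conv2d(X, K):
    """Valid 2D convolution (cross-correlation) of image X with kernel K, as in Eq. (1)."""
    h, w = X.shape
    kh, kw = K.shape
    Y = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = np.sum(X[i:i + kh, j:j + kw] * K)
    return Y

def max_pool2d(X, s=2):
    """Non-overlapping max pooling with stride s, as in Eq. (2)."""
    h, w = X.shape
    Y = np.zeros((h // s, w // s))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = X[i * s:(i + 1) * s, j * s:(j + 1) * s].max()
    return Y

# Toy example: random 6x6 "image" and a 3x3 vertical-edge-style kernel
X = np.random.rand(6, 6)
K = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
feature_map = np.maximum(conv2d(X, K), 0)   # convolution followed by ReLU
pooled = max_pool2d(feature_map, s=2)        # 4x4 feature map -> 2x2
```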
Subsequently, once a series of convolution and pooling
layers are applied (the number of each is an adjustable
hyperparameter decided to maximize accuracy, and minimize
time to convergence), the output is flattened, and fed to
the terminal layer, whereby classification takes place. Generally, the sigmoid function, \sigma(x) = 1 / [1 + \exp(-x)], is used for binary classification tasks, or the softmax, \tilde{\sigma}(x_i) = \exp(x_i) / \sum_{j=1}^{|\text{classes}|} \exp(x_j), is used, where |classes| is the number of classes.
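As a short, purely illustrative sketch of these two classification heads (our own code, not the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    # Binary classification head
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Multi-class head; subtracting the max improves numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0, 0.1])   # one logit per class
print(softmax(logits))                      # probabilities summing to 1
```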
Furthermore, the sub-operations within the network that take
place include:
Subsampling decreases the number of weights in the net-
work, thereby increasing the receptive field of the neurons in
order to capture more information about the input image on
a global scale, and improve the network’s ability to recognize
objects, no matter their location via translational invariance.
Convolution is the mathematical operation of sliding a filter matrix, containing learnable weights that help infer different features, over the image and taking the inner product of the matrix with the pixels at each location to extract the feature.
The importance of convolution cannot be over-stressed as it is
fundamental for automatically discerning features, thus elimi-
nating the need for manual feature engineering. However, each
time a convolutional operator is applied, the dimensionality of
the image is reduced.
Padding is the process of adding pixels around the edge of an image prior to applying the convolutional operator. This process is vital because it ensures that the resulting feature map has the same dimensions as the input image. In addition, without padding, pixels at the edge of the image contribute to fewer kernel positions than interior pixels once the convolutional operator is applied, so border information is underrepresented.
Batch normalization is a statistical technique used to nor-
malize the activations of the neurons in the CNN. This is
accomplished by adjusting and scaling the inputs to each layer.
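A minimal illustration of this adjusting-and-scaling step is sketched below (our own example; the learnable scale and shift parameters, often called gamma and beta, are shown as fixed scalars):

```python
import numpy as np

def batch_norm(activations, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance,
    then rescale (gamma) and shift (beta)."""
    mean = activations.mean(axis=0)
    var = activations.var(axis=0)
    normalized = (activations - mean) / np.sqrt(var + eps)
    return gamma * normalized + beta

batch = np.random.rand(8, 16)          # 8 samples, 16 activations each
print(batch_norm(batch).mean(axis=0))  # approximately zero per activation
```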
However, classical CNNs have the drawback of being unable to learn global and remote semantic information well, amongst others. Therefore, a natural consideration for advancement would be the incorporation of another thriving technology; thus, our proposal of supplementation via Quantum Machine Learning (QML). Below, we summarize the drawbacks of classical CNNs and highlight how QCNNs may conceivably address these challenges.
1) The Problem of Overfitting: CNNs are highly prone
to learning the probability distributions of the training
data “too well”, and therefore even learn erroneous noise
and outliers. Owing to larger feature spaces, QCNNs are
potentially less prone to overfitting.
2) Cost of Computation: Regular CNNs have quadratic runtimes, O(N_i × N_o^2 × S_i^2 × S_c^2), where N_i is the number of input feature maps, N_o is the number of output feature maps, S_i is the size of the input feature map, and S_c is the size of the convolution filter. QCNN runtimes have the potential to be much lower due to the superposition of qubits, which can create an exponential speedup.
3) Learning Mappings from Features to the Predictor: As the image dataset gets more complicated and more expansive in size, CNNs are limited in their expressivity. QCNNs have the potential to be more expressive by exploiting the quantum mechanical properties of superposition and entanglement.
Therefore, it is evident that considering QCNNs, and the
augmentation of classical CNNs with QML proves to be a
fruitful venture. Integrally, QML is a developing research enterprise that has myriad real-world applications in diverse
fields like drug discovery [5]–[9], materials science [10], [11],
condensed matter physics [12]–[15], optimization [16]–[20],
finance [21]–[25], logistics planning [26]–[29], cryptography

Fig. 1: Architecture of a classical CNN.
and cybersecurity [30], [31], and many other innovative appli-
cations [32]–[35].
The augmentation of QML with CNNs leads to Quantum
Convolutional Neural Networks (QCNNs). In this study, we
use QCNNs to perform brain tumor classifications on an open-
source brain tumor image dataset. While this is not novel, this
study offers several contemporary advancements, and superior
performance for in-distribution classification, as well as out-
of-distribution generalizability.
The original contributions of this paper are: The integration
of an extremely large image dataset, which is significant
considering the computational power available in the NISQ
era; the development and implementation of a QCNN with
a classical component that achieves high accuracy; and a
modular and scalable framework for the QCNN that ensures
adaptability to various computational contexts.
The structure of this paper is organized in the following
manner:
In Sec. II, we provide a literature review of the most relevant
papers in the field.
In Sec. III, we delve into the methodologies and underlying
theories of QCNNs, providing a detailed theoretical back-
ground.
In Sec. IV, we describe the process and implementation of
the QCNN used in this study, detailing the adaptations and
enhancements made to the original model.
In Sec. V, we present the results of our experiments.
In Sec. VI, we summarize our research findings and provide
a reflective analysis of the results obtained.
II. LITERATURE REVIEW
Below, we present salient studies that have motivated our
research undertaking, that have applied QML to brain tumor
classification tasks, and briefly describe the results obtained.
In [36], the authors developed a hybrid quantum-classical
CNN (HQC-CNN) model for brain tumor classification. The
novelties of the proposed network were that it achieved a
high classification score (97.8%), it was easy to train, and
converged to a solution very quickly. The classification prob-
lem was posed as a quaternary model, i.e. having four output
class types: Meningioma, glioma, pituitary, and no tumor.
In addition, it was demonstrated that the proposed network
outperformed many well-known models.
In [37], the authors use deep features extracted from the
famous Inception V3 model, combined with a parametric
quantum circuit on a predictor variable with four classes,
similar to the classification task in [36]. Using three datasets
as benchmarks (from Kaggle, the 2020-BRATS dataset, and a locally-collected dataset), it was shown that the proposed hybrid approach has superior performance compared to traditional CNNs, with more than 90% accuracy.
The research in [38] sets out to address the ever-growing
size of image datasets in medical diagnoses, whilst maintaining
patient privacy, with a specific emphasis on brain tumor im-
ages. Further, a secure encryption-decryption framework is de-
signed to work on MRI data, and a 2-qubit tumor classification
model is implemented. The Dice Similarity Coefficient (DSC)
was used as a validation metric, and the model was found
to have a value of 98%. This research not only considered
designing a high-accuracy brain tumor classification model,
but also addressed the issue of cryptographic concerns.
In [39], the authors aim to address concerns around tu-
mor imaging and prediction using traditional ML techniques
amongst children and teenagers. Using India as a study group,
it is found that the data lacks variability, and does not account

for abnormalities within these specific groups. To address
this, a QCNN model is proposed that integrates techniques
from image processing to mitigate noise. It is found that the
proposed quantum network achieves an 88.7% accuracy.
In [40], the authors use MRI-radiomatic data to implement a
brain tumor model using a Quantum Neural Network (QNN).
The workflow involved the usage of a mutual information
feature selection (MIFS) technique, and the problem was
converted into a combinatorial optimization task, which was
subsequently solved using a quantum annealer on a D-Wave
machine. While not specifically a CNN-based model, or an augmentation thereof, the technique proved versatile and achieved accuracies on par with classical methods.
In [41], the authors design an automatic MRI segmentation model based on qutrits (quantum states of the form |ψ⟩ = α|0⟩ + β|1⟩ + γ|2⟩ with |α|² + |β|² + |γ|² = 1), called quantum fully self-supervised neural networks (QFS-Net). It is shown that this approach increases segmentation accuracy (model predictability), outperforming classical networks, and improves convergence. Besides the qutrit framework on which the model is based, the other novel contributions of this research were the implementation of parametrized Hadamard gates, the neighborhood-based topological interconnectivity amongst network layers, and the usage of nonlinear transformations. QFS-Net also demonstrated favorable results on the Berkeley gray-scale images.
Various other relevant works are cited within the studies discussed above, and it would be an exercise in futility simply to repeat their findings. However, what is evident is that the hybrid approach, via augmentation of the QCNN with salient features of the classical CNN, is emphasized in the literature, and this serves as motivation for adopting a hybridized approach to the network in this paper.
III. METHODOLOGY
In this section, we present the mathematical formalism of
QCNNs.
Similar to CNNs, QCNNs use a network of quantum convolution layers and quantum activation functions to perform feature extraction from images. Operationally, as presented in Fig. 2, we can describe the operation of a QCNN as follows:
1) Data Encoding: Classical data D_i ∈ D is converted to a single-qubit quantum state |ψ_i⟩ ∈ H,

D_i \xrightarrow{\text{encoding}} |\psi_i\rangle = \alpha|0\rangle + \beta|1\rangle,    (5)

where α, β ∈ C are complex probability amplitudes.
2) Quantum Convolution: This is achieved by applying a series of quantum gates to the encoded state |ψ⟩. We cumulatively represent these gates as U,

U|\psi\rangle, \quad U = \mathrm{combination}\left(H^{\otimes n_1}, R_{\theta}^{\otimes n_2}, X^{\otimes n_3}, Y^{\otimes n_4}, Z^{\otimes n_5}, \ldots\right),    (6)

where the function combination represents taking amalgamations of the single-qubit gates, each applied n_i times, for i = 1, 2, ....
3) Quantum Pooling: The operation is carried out by performing the quantum SWAP test (a PennyLane sketch of this test appears after this list). Given two quantum states |ψ_i⟩, |ψ_j⟩ and an ancillary qubit |0⟩:
3.1. Apply the Hadamard gate to the ancillary qubit in order to create a superposition,

H|0\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right) = |+\rangle.    (7)

3.2. Apply the controlled-SWAP gate with the ancillary qubit as the control, and |ψ_i⟩ and |ψ_j⟩ as the targets,

\mathrm{CSWAP}(|0\rangle; |\psi_i\rangle, |\psi_j\rangle) = |0\rangle\langle 0| \otimes |\psi_i\rangle\langle\psi_i| + |1\rangle\langle 1| \otimes |\psi_j\rangle\langle\psi_j|.    (8)

3.3. Apply the Hadamard transformation to the ancillary qubit again.
4) Fully-Connected Layer: This layer is composed of
neurons that are oriented in a feed-forward arrangement
whereby previous neurons are connected to all subse-
quent neurons.
5) Measurement: Perform measurement to ascertain the
end state.
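The quantum pooling step described in items 3.1 to 3.3 can be illustrated with a minimal PennyLane sketch of the SWAP test. This is our own example, not the authors' code: the RY rotations are placeholder state preparations standing in for the encoded states |ψ_i⟩ and |ψ_j⟩, and the expectation value of Pauli-Z on the ancilla equals the squared overlap |⟨ψ_i|ψ_j⟩|².

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=3)   # wire 0: ancilla; wires 1, 2: |psi_i>, |psi_j>

@qml.qnode(dev)
def swap_test(theta_i, theta_j):
    # Placeholder preparation of two single-qubit states |psi_i>, |psi_j>
    qml.RY(theta_i, wires=1)
    qml.RY(theta_j, wires=2)
    # Steps 3.1-3.3: Hadamard on the ancilla, controlled-SWAP, Hadamard again
    qml.Hadamard(wires=0)
    qml.CSWAP(wires=[0, 1, 2])
    qml.Hadamard(wires=0)
    # <Z> on the ancilla equals |<psi_i|psi_j>|^2
    return qml.expval(qml.PauliZ(0))

print(swap_test(0.3, 0.3))    # identical states  -> approximately 1.0
print(swap_test(0.0, np.pi))  # orthogonal states -> approximately 0.0
```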
In Algorithm 1, we provide a general procedure for applying the QCNN model to any dataset.
Algorithm 1 QCNN(D)
procedure QCNN(D)
    Input: a dataset D = {D_1, D_2, ..., D_p} consisting of p n × m-dimensional images
    Initialize the number of repeats q for the quantum convolutional and quantum pooling layers
    for each image D_i in D do
        Encode D_i into a quantum state |ψ_i⟩
        repeat
            Apply U
            Measure
            Apply the quantum SWAP test
        until q times
    end for
    Apply a fully connected NN to transform the output into a 1D vector of class scores
    Measure to obtain classifications
    Return the classified image with its associated probability
end procedure
IV. PROCESS AND IMPLEMENTATION
In this section, we detail the practical steps taken to opera-
tionalize our QCNN model by discussing the workflow. We be-
gin by delineating the composition and sourcing of our dataset,
followed by the procedures employed in data preprocessing to
ensure optimal model input quality. Subsequently, we elucidate
our innovative approach to quantum image processing, which
sets the stage for the subsequent CNN training. This phase

Fig. 2: Hypothetical architecture of the QCNN model.
is critical as it underpins the model’s ability to learn from
the quantum-enhanced feature space. Each step is crafted to
build incrementally towards a robust and practical QCNN
application.
A. Dataset
The dataset used in this research [42] comprises 3,064 T1-weighted contrast-enhanced images extracted from 233 patients, encompassing three distinct brain tumor types, as shown in Fig. 3: meningioma (708 slices), glioma (1,426 slices), and pituitary tumor (930 slices). The data is organized
in MATLAB format (.mat files), each containing a structure
that encapsulates various fields detailing the image and tumor-
specific information. Each MATLAB (.mat) file in the dataset
encompasses the following key fields:
a. cjdata.label: An integer indicating the tumor type,
coded as follows:
1) Meningioma
2) Glioma
3) Pituitary Tumor
Fig. 3: Distribution of sample sizes across tumor categories in
the dataset.
b. cjdata.PID: Patient ID, serving as a unique identi-
fier for each patient.
c. cjdata.image: Image data representing the T1-
weighted contrast-enhanced brain scan.
d. cjdata.tumorBorder: A vector storing the coor-
dinates of discrete points delineating the tumor border.
The vector format is as follows:
[x1,y1,x2,y2,...]
These coordinate pairs represent planar positions
on the tumor border and were generated through
manual delineation, offering the potential to create
a binary image of the tumor mask.
e. cjdata.tumorMask: A binary image where pixels
with a value of 1 signify the tumor region. This mask is
a crucial resource for further segmentation and analysis.
This dataset provides a diverse set of brain scans, as shown
in Fig. 4, and includes manual annotations of tumor borders,
fostering the development and evaluation of robust image
processing and ML models for brain cancer classification [43],
[44].
B. Data Preprocessing
This step loads the MATLAB data files, extracts the image data, tumor labels, and relevant information, and then converts the T1-weighted images to a format suitable for quantum circuit processing.
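A minimal sketch of this preprocessing step is given below (our own illustrative code, not the authors' implementation). It assumes the .mat files are in MATLAB v7.3/HDF5 format and therefore readable with h5py (scipy.io.loadmat would be needed for older formats); the folder name brain_tumor_dataset, the helper name load_cjdata, the 64 × 64 target size, and the crude strided resize are assumptions made purely for illustration.

```python
import numpy as np
import h5py                      # assumes MATLAB v7.3 (HDF5) .mat files
from pathlib import Path

def load_cjdata(mat_path, size=64):
    """Load one .mat file and return a (normalized image, integer label) pair."""
    with h5py.File(mat_path, "r") as f:
        cjdata = f["cjdata"]
        label = int(np.array(cjdata["label"]).squeeze())   # 1: meningioma, 2: glioma, 3: pituitary
        image = np.array(cjdata["image"], dtype=np.float32)
    # Normalize pixel values to [0, 1] so the RX angles phi_i * pi are well-scaled
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    # Crude strided downsampling to a fixed size; a proper interpolation (e.g. PIL or OpenCV)
    # can be substituted
    step_r = max(1, image.shape[0] // size)
    step_c = max(1, image.shape[1] // size)
    return image[::step_r, ::step_c][:size, :size], label

images, labels = [], []
for mat_file in sorted(Path("brain_tumor_dataset").glob("*.mat")):   # hypothetical folder name
    img, lab = load_cjdata(mat_file)
    images.append(img)
    labels.append(lab)
```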
C. Quantum Image Processing
In this step, the QCNN circuit stands as a pivotal com-
ponent, and the circuit comprises several essential elements.
Initially, a quantum device with 4 qubits is initialized using PennyLane's default qubit simulator [45], as presented in Fig. 5. A significant step in this process is setting the parameter θ to π/2. This parameter finds its application in the Controlled-Rotation-Z (CRZ) and Controlled-Rotation-X (CRX) gates.
The circuit employs Rotation-X (RX) gates applied to each
qubit. These gates rotate the qubit around the X-axis of the
Bloch sphere. The rotation angle is proportional to the pixel
value times π. This operation is mathematically expressed as

|\psi'\rangle = \prod_{i=0}^{3} RX_i(\phi_i \pi)\, |\psi\rangle,    (9)

with

RX_i(\phi_i \pi) = \begin{pmatrix} \cos(\phi_i \pi/2) & -\imath \sin(\phi_i \pi/2) \\ -\imath \sin(\phi_i \pi/2) & \cos(\phi_i \pi/2) \end{pmatrix},    (10)

Fig. 4: Diagnostic imaging examples: Brain tumor types from the dataset.
Fig. 5: QCNN circuit.
where |ψ⟩ = |0⟩^{⊗4} represents the initial state of the quantum system, and \phi_i are the elements of the pixel-value array passed to the quantum circuit.
Following this, the circuit incorporates controlled rotation
gates, specifically CRZ and CRX gates. These gates are
applied between each pair of adjacent qubits and between the
last and first qubits. These gates introduce interactions between
the qubits. Mathematically, this is represented as
|\psi''\rangle = \left(CRZ(\theta)_{0,1} \cdot CRX(\theta)_{0,1}\right) \times \left(CRZ(\theta)_{1,2} \cdot CRX(\theta)_{1,2}\right) \times \left(CRZ(\theta)_{2,3} \cdot CRX(\theta)_{2,3}\right) |\psi'\rangle,    (11)

with

CRZ(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \exp(-\imath\theta/2) & 0 \\ 0 & 0 & 0 & \exp(\imath\theta/2) \end{pmatrix},    (12)
and

CRX(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos(\theta/2) & -\imath\sin(\theta/2) \\ 0 & 0 & -\imath\sin(\theta/2) & \cos(\theta/2) \end{pmatrix},    (13)
where CRZ(θ) and CRX(θ) are the phase-parameterized CRZ and CRX gates, respectively. Moreover, the circuit incorporates CZ gates that are applied between each pair of adjacent qubits and between the last and first qubits, adding an additional layer of entanglement between the qubits. Mathematically, this is represented as

|\psi'''\rangle = CZ_{0,1} \cdot CZ_{1,2} \cdot CZ_{2,3} \cdot CZ_{3,0}\, |\psi''\rangle,    (14)
with

CZ = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.    (15)
Finally, the measurement phase occurs. In this phase, the
expectation value of the Pauli-Z operator is measured on the
first qubit. This step provides the average result of numerous
measurements, mathematically encapsulated as

M = \langle \psi''' | Z | \psi''' \rangle,    (16)

where |ψ'''⟩ is the state after the CZ gates, indicating the output of the quantum convolution operation on the 2 × 2 patch as a complex function of the pixel values, reflective of the quantum nature of the operation.
The quantum convolution function plays a crucial role in
the process of quantum image processing. This function is
designed to perform a quantum convolution operation on an
input image. The function requires two arguments: Firstly, an
image represented as a 2D NumPy array, where each element
corresponds to a pixel value. Secondly, the step size, which divides the image into patches, is set to 2 by default; thus, the
image is divided into 2 × 2 patches.
In terms of processing, the function begins by initializing
a new 2D array of zeros. The dimensions of this array are
determined by the height and width of the image divided
by the step size, and this array serves to store the output of
the quantum convolution operation. Subsequently, the function
divides the image into 2×2 patches by looping over the image
with the specified step size. For each 2×2 patch, the function
flattens the patch into a 1D array and then applies the quantum
convolution operation, as performed by the QCNN circuit.
The measurement result of this operation is then stored in
the corresponding position in the output array.
In the output layer, the function returns a new 2D array.
Each element in the array represents the result of the quantum
convolution operation on the corresponding 2 × 2 patch in
the original image. This resulting array can be perceived
as a transformed version of the original image, where the
transformation is a complex function of the pixel values
stemming from the quantum nature of the operation.
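Putting the pieces of this subsection together, the following PennyLane sketch reproduces the described circuit and quantum convolution function under our reading of the text; it is illustrative, not the released code. It applies RX encoding of the four pixel values of a 2 × 2 patch (Eq. (9)), CRZ and CRX couplings with θ = π/2 between adjacent qubits and between the last and first qubit (the text mentions the last-to-first coupling, although Eq. (11) lists only the adjacent pairs), a CZ entangling layer (Eq. (14)), and a Pauli-Z measurement on the first qubit (Eq. (16)). The names qcnn_circuit and quanvolution are ours.

```python
import numpy as np
import pennylane as qml

n_qubits = 4
theta = np.pi / 2                               # fixed controlled-rotation angle
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qcnn_circuit(phi):
    """phi: flattened 2x2 patch of pixel values in [0, 1] (4 entries)."""
    # Encoding: RX rotation proportional to each pixel value, Eq. (9)
    for i in range(n_qubits):
        qml.RX(phi[i] * np.pi, wires=i)
    # Quantum convolution: CRZ and CRX between adjacent qubits and last-to-first, Eq. (11)
    for i in range(n_qubits):
        j = (i + 1) % n_qubits
        qml.CRZ(theta, wires=[i, j])
        qml.CRX(theta, wires=[i, j])
    # Entangling CZ layer, Eq. (14)
    for i in range(n_qubits):
        qml.CZ(wires=[i, (i + 1) % n_qubits])
    # Measurement: expectation value of Pauli-Z on the first qubit, Eq. (16)
    return qml.expval(qml.PauliZ(0))

def quanvolution(image, step=2):
    """Apply the 4-qubit circuit to each non-overlapping 2x2 patch of the image."""
    h, w = image.shape
    out = np.zeros((h // step, w // step))
    for r in range(0, h - step + 1, step):
        for c in range(0, w - step + 1, step):
            patch = image[r:r + step, c:c + step].flatten()
            out[r // step, c // step] = qcnn_circuit(patch)
    return out

# Usage: processed = quanvolution(preprocessed_image)  # image from the loading sketch above
```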
The core component of the implementation is a loop designed for image processing, which iterates through every file in a designated folder, checking during each iteration whether the current file has already been processed. If it has not, the loop loads the image and its associated label from the file. Following this, it resizes and normalizes the image. Subsequently, the quantum convolution function is applied to the image. Finally, the processed image and its label are saved into a new file.
D. CNN Training
This approach involves training the CNN on quantum-
processed image data. The model architecture comprises vari-
ous layers, each with specific functions. The first is a Conv2D
layer with 32 filters, a 3×3 kernel size, and ReLU activation,
designed to detect low-level features like edges and corners in
the input image. This is followed by a MaxPooling2D layer
with a 2×2 pool size, which reduces the spatial dimensions of
the input by taking the maximum value in each 2×2 window,
aiding in translation invariance and reducing computational
complexity. Another Conv2D layer is then applied, with 64
filters, and once again with ReLU activation, to detect more
complex features from the previous layer’s output. A second
MaxPooling2D layer further reduces the input’s spatial
dimensions. The flatten layer then converts the 2D output into
a 1D array for processing in the dense layers. Subsequently,
a dense layer with 128 units and ReLU activation learns to
represent the input in a 128-dimensional space, detecting high-
level features. To prevent overfitting, a dropout layer with
a 0.5 dropout rate is included, randomly setting half of the
input units to 0 during training. Finally, another dense layer
with 4 units and softmax activation outputs the probabilities
of the input image belonging to each of the four classes.
This comprehensive architecture enables the classical CNN
to effectively learn from the quantum-processed data.
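The layer names above (Conv2D, MaxPooling2D, Dense, Dropout) suggest a Keras-style implementation, so the sketch below assembles the described architecture as a minimal, hedged example. The input shape, optimizer, loss, and training call are our assumptions for illustration (the paper specifies 20 epochs but not these details), not the authors' exact configuration.

```python
from tensorflow.keras import layers, models

def build_classifier(input_shape=(32, 32, 1), num_classes=4):
    """Assemble the classical CNN described in the text (assumed hyperparameters)."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),  # low-level features
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),                           # higher-level features
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                                                     # mitigate overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Optimizer and loss are assumptions; labels are assumed to be zero-indexed integers
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_classifier()
# model.fit(quantum_processed_images, labels, validation_split=0.2, epochs=20)
```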
V. RESULTS AND DISCUSSION
This study thoroughly examined the application of a QCNN for classifying brain tumor images, for the purpose of having a more expressive model that can be used in diagnostic medicine. The performance metrics from our findings highlight the model's exceptional accuracy and its potential for commercialization and usage in a real-world setting.
The loss and accuracy were calculated for 20 epochs for
the training and validation data. As presented in Fig. 6a, the
progressive decline in loss indicates the model’s robust error
minimization capabilities. This steady reduction in training and
validation loss signifies that the QCNN model is effectively
learning from the training data while also generalizing well
to new, unseen data. Such a trend strongly indicates the
model’s ability to avoid overfitting, a common challenge in
deep learning models applied to medical image analysis.
Similarly, the consistent increase in accuracy shown in Fig.
6b confirms the model’s competence in correctly classifying
brain tumors. The high accuracy rate, especially with a peak validation accuracy of 99.67%, is particularly noteworthy given
the complexity and variability inherent in medical imaging
data. This suggests that the QCNN can capture intricate
patterns and features in the images, which are crucial for
accurate diagnosis. However, it is crucial to note that per-
formance may vary with changes in image complexity and
dataset diversity. The confusion matrix, shown in Fig. 7,
offers additional insights into the model’s performance, par-
ticularly in its class-wise accuracy. The perfect classification
of “pituitary tumor” cases is a testament to the model’s
precision. However, challenges were observed in differentiating between the meningioma and glioma classes, which highlights areas for further refinement. These misclassifications may stem
from the inherent similarities in the imaging characteristics
of these tumor types, suggesting a need for more nuanced
feature extraction techniques or additional training data to
enhance differentiation. The practical utility of the QCNN
model is further validated through the real-world test cases
showcased in Fig. 8, which displays 10 randomly selected
test images, each annotated with its actual label and the
model’s prediction. The accurate predictions in these cases
demonstrate the model’s readiness for deployment in clinical
settings, where it can aid in the rapid and reliable diagnosis
of brain tumors.
In summary, the QCNN model’s outstanding performance in
this study demonstrates its capability as a powerful analytical
tool in medical imaging and highlights its transformative
potential in clinical diagnostics. The model’s high accuracy,
achieved in the challenging task of brain tumor image clas-
sification, underscores its precision and reliability. Moreover,
the robustness of the QCNN in processing and interpreting
complex image data attests to its sophisticated design and
adaptability. This robustness is particularly crucial in medical
imaging, where image variability and accuracy are paramount.
The QCNN model’s ability to consistently improve over
training epochs, as evidenced by decreasing loss and in-

Fig. 6: Training and validation performance plots: (a) Loss and (b) Accuracy.
Fig. 7: Confusion matrix displaying class-wise performance.
creasing accuracy, indicates a strong learning capability and
a practical approach to generalizing from training data to
unseen data. This aspect is vital for clinical applications where
models must perform reliably on diverse and novel datasets.
Furthermore, despite some challenges, the model's nuanced handling of different tumor types opens avenues for further optimization and refinement. These qualities collectively posi-
tion the QCNN model as a promising candidate for real-world
clinical applications, potentially revolutionizing how medical
imaging is analyzed and interpreted.
VI. CONCLUSION
This paper presents a comprehensive exploration of QCNNs
in the context of medical imaging, particularly for brain
tumor classification. A theoretical overview of QCNNs and
conventional CNNs was provided, followed by a detailed
literature review of various approaches in this domain. This
theoretical groundwork laid the foundation for our practical
implementation, which introduced innovative modifications
while adhering to general QCNN principles.
Our implementation strategy involved integrating a conven-
tional CNN within the QCNN framework, essentially creating
a hybrid model that leverages both technologies’ strengths.
This approach deviates from standard QCNN designs and
represents an experimental foray into uncharted territories of
neural network architectures.
Additionally, using a quantum simulator to execute the
model and generate results is a significant step in practical
QML applications. The outcomes achieved in this study high-
light that a quantum approach provides an uplift over classical approaches, as evidenced by the accuracy rates in classifying
complex medical images. This provides a compelling case for
the QCNN model’s efficacy.
The findings of this research contribute to the growing
body of knowledge on the applicability of QML in solving
real-world problems, particularly in medical diagnostics. In
addition, this opens up new possibilities for future research,
including the potential to explore other complex tasks.
In conclusion, this study advances our understanding of
QCNNs and demonstrates their practical application in a
critical field. The promising results pave the way for further
exploration and development of QML solutions in medical
imaging, potentially significantly enhancing diagnostic accu-
racy and patient outcomes.
DECLARATIONS
Conflicts of Interest
The authors declare no competing interests.
Authors’ contributions
All authors have contributed equally.

Fig. 8: Sample predictions with true and predicted labels.
Availability of Data and Materials
The datasets and numerical details necessary to replicate
this work are available from the corresponding author upon
reasonable request.
REFERENCES
1 Siegel, R. L., Giaquinto, A. N., & Jemal, A. (2024). “Cancer statistics,
2024”. CA: A Cancer Journal for Clinicians, https://doi.org/10.3322/caac.
21820.
2 Ullah, U., & Garcia-Zapirain, B. (2024). “Quantum Machine Learning
Revolution in Healthcare: A Systematic Review of Emerging Perspectives
and Applications”. IEEE Access, https://doi.org/10.1109/ACCESS.2024.
3353461.
3 Vatandoost, M., & Litkouhi, S. (2019). “The future of healthcare facilities:
how technology and medical advances may shape hospitals of the future”.
Hospital Practices and Research, 4(1), https://doi.org/10.15171/hpr.2019.
01.
4 Menegatti, D., et al. (2023). “CADUCEO: A Platform to Support Federated
Healthcare Facilities through Artificial Intelligence”. Healthcare, 11(15),
https://doi.org/10.3390/healthcare11152199.
5 Cao, Y., Romero, J., & Aspuru-Guzik, A. (2018). “Potential of quantum
computing for drug discovery”. IBM Journal of Research and Development,
62(6), https://doi.org/10.1147/JRD.2018.2888987.
6 Blunt, N. S., et al. (2022). “Perspective on the current state-of-the-art of
quantum computing for drug discovery applications”. Journal of Chemical
Theory and Computation, 18(12), https://doi.org/10.1021/acs.jctc.2c00574.
7 Zinner, M., Dahlhausen, F., Boehme, P., Ehlers, J., Bieske, L., & Fehring,
L. (2021). “Quantum computing’s potential for drug discovery: Early stage
industry dynamics.” Drug Discovery Today, 26(7), https://doi.org/10.1016/
j.drudis.2021.06.003.
8 Wang, P. H., Chen, J. H., Yang, Y. Y., Lee, C., & Tseng, Y. J. (2023). “Re-
cent Advances in Quantum Computing for Drug Discovery and Develop-
ment”, IEEE Nanotechnology Magazine, https://doi.org/10.1109/MNANO.
2023.3249499.
9 Batra, K., Zorn, K. M., Foil, D. H., Minerali, E., Gawriljuk, V. O., Lane,
T. R., & Ekins, S. (2021). “Quantum machine learning algorithms for drug
discovery applications”. Journal of Chemical Information and Modeling,
61(6), https://doi.org/10.1021/acs.jcim.1c00166.
10 Bauer, B., Bravyi, S., Motta, M., & Chan, G. K. L. (2020). “Quantum al-
gorithms for quantum chemistry and quantum materials science”. Chemical
Reviews, 120(22), https://doi.org/10.1021/acs.chemrev.9b00829.
11 Liu, H., Low, G. H., Steiger, D. S., Häner, T., Reiher, M., & Troyer,
M. (2022). “Prospects of quantum computing for molecular sciences”.
Materials Theory, 6(1), https://doi.org/10.1186/s41313-021-00039-z.
12 Smith, A., Kim, M. S., Pollmann, F., & Knolle, J. (2019). “Simulating
quantum many-body dynamics on a current digital quantum computer”. npj
Quantum Information, 5(1), https://doi.org/10.1038/s41534-019-0217-0.
13 Micheletti, C., Hauke, P., & Faccioli, P. (2021). “Polymer physics by
quantum computing”. Physical Review Letters, 127(8), https://link.aps.org/
doi/10.1103/PhysRevLett.127.080501.
14 Innan, N., Khan, M. A. Z., & Bennai, M. (2024). “Quantum computing for
electronic structure analysis: Ground state energy and molecular properties
calculations”. Materials Today Communications, 38(107760), https://doi.
org/10.1016/j.mtcomm.2023.107760.
15 Vorwerk, C., Sheng, N., Govoni, M., Huang, B., & Galli, G. (2022).
“Quantum embedding theories to simulate condensed systems on quantum
computers”. Nature Computational Science, 2(7), https://doi.org/10.1038/
s43588-022-00279-0.
16 Ajagekar, A., & You, F. (2019). “Quantum computing for energy systems
optimization: Challenges and opportunities”. Energy, 179, https://doi.org/
10.1016/j.energy.2019.04.186.
17 Ajagekar, A., Humble, T., & You, F. (2020). “Quantum computing based
hybrid solution strategies for large-scale discrete-continuous optimization
problems.” Computers & Chemical Engineering, 132, https://doi.org/10.
1016/j.compchemeng.2019.106630.
18 Wang, L., Tang, F.,& Wu, H. (2005). “Hybrid genetic algorithm based on
quantum computing for numerical optimization and parameter estimation”.
Applied Mathematics and Computation, 171(2), https://doi.org/10.1016/j.
amc.2005.01.115.
19 Shukla, A., & Vedula, P. (2019). “Trajectory optimization using quantum
computing”. Journal of Global Optimization, 75, https://doi.org/10.1007/
s10898-019-00754-5.
20 Li, Y., Tian, M., Liu, G., Peng, C., & Jiao, L. (2020). “Quantum
optimization and quantum learning: A survey”. IEEE Access, 8, https://doi.org/10.1109/ACCE.
21 Orús, R., Mugel, S., & Lizaso, E. (2019). “Quantum computing for
finance: Overview and prospects”. Reviews in Physics, 4, https://doi.org/
10.1016/j.revip.2019.100028.
22 Innan, N., Khan, M. A. Z., & Bennai, M. (2023). “Financial fraud
detection: a comparative study of quantum machine learning models”.
International Journal of Quantum Information, https://doi.org/10.1142/
S0219749923500442.
23 Emmanoulopoulos, D., & Dimoska, S. (2022). “Quantum machine learn-
ing in finance: Time series forecasting.” arXiv preprint arXiv:2202.00599.
24 Herman, D., et al. (2022). “A survey of quantum computing for finance”.
arXiv preprint arXiv:2201.02773.
25 Innan, N. et al. (2023). “Financial fraud detection using quantum graph
neural networks”. arXiv preprint arXiv:2309.01127.
26 Correll, R., Weinberg, S. J., Sanches, F., Ide, T., & Suzuki, T. (2023).
“Quantum Neural Networks for a Supply Chain Logistics Applica-
tion”. Advanced Quantum Technologies, 6(7), https://doi.org/10.1002/qute.
202200183.
27 Weinberg, S. J., Sanches, F., Ide, T., Kamiya, K., & Correll, R. (2023).
“Supply chain logistics with quantum and classical annealing algorithms”.
Scientific Reports, 13(1), https://doi.org/10.1038/s41598-023-31765-8.
28 Azzaoui, A. E., Kim, T. W., Pan, Y., & Park, J. H. (2021). “A quantum
approximate optimization algorithm based on blockchain heuristic approach
for scalable and secure smart logistics systems”. Human-centric Computing
and Information Sciences, 11(46).
29 Gachnang, P., Ehrenthal, J., Hanne, T., & Dornberger, R. (2022). “Quan-
tum Computing in Supply Chain Management State of the Art and Research
Directions”. Asian Journal of Logistics Management, 1(1), https://doi.org/
10.14710/ajlm.2022.14325.

30 Dixit, V., et al. (2021). “Training a quantum annealing based restricted
boltzmann machine on cybersecurity data”. IEEE Transactions on Emerg-
ing Topics in Computational Intelligence, 6(3), https://doi.org/10.1109/
TETCI.2021.3074916.
31 Suryotrisongko, H., & Musashi, Y. (2022). “Evaluating hybrid quantum-
classical deep learning for cybersecurity botnet DGA detection”. Procedia
Computer Science, 197, https://doi.org/10.1016/j.procs.2021.12.135.
32 Innan, N., et al.(2023). “Quantum State Tomography using Quantum
Machine Learning”. arXiv preprint, arXiv:2308.10327.
33 Innan, N., & Khan, M. A. Z. (2023). “Classical-to-Quantum Sequence
Encoding in Genomics”. arXiv preprint, arXiv:2304.10786.
34 Innan, N., & Bennai, M. (2023). “Simulation of a Variational Quantum
Perceptron using Grover’s Algorithm”. arXiv preprint, arXiv:2305.11040.
35 Innan, N., Khan M. A. Z., & Bennai, M. (2023). “Enhancing quantum sup-
port vector machines through variational kernel training”. Quantum Infor-
mation Processing, 22(374), https://doi.org/10.1007/s11128-023-04138-3.
36 Dong, Y., Fu, Y., Liu, H., Che, X., Sun, L., & Luo, Y. (2023). “An
improved hybrid quantum-classical convolutional neural network for multi-
class brain tumor MRI classification”. Journal of Applied Physics, 133(6),
https://doi.org/10.1063/5.0138021.
37 Amin, J., Anjum, M. A., Sharif, M., Jabeen, S., Kadry, S., & Moreno Ger,
P. (2022). “A new model for brain tumor detection using ensemble transfer
learning and quantum variational classifier”. Computational Intelligence
and Neuroscience, https://doi.org/10.1155/2022/3236305.
38 Amin, J., Anjum, M. A., Gul, N., & Sharif, M. (2022). “A secure two-qubit
quantum model for segmentation and classification of brain tumor using
MRI images based on blockchain.” Neural Computing and Applications,
34(20), https://doi.org/10.1007/s00521-022-07388-x.
39 Chandra, S., Saxena, S., Kumar, S., Chaube, M. K., & Bodhey, N. K.
(2022, December). “A Novel Framework For Brain Disease Classification
Using Quantum Convolutional Neural Network.” 2022 IEEE International
Women in Engineering (WIE) Conference on Electrical and Computer En-
gineering (WIECON-ECE), https://doi.org/10.1109/WIECON-ECE57977.
2022.10150851.
40 Felefly, T. et al. (2023). “An Explainable MRI-Radiomic Quantum Neural
Network to Differentiate Between Large Brain Metastases and High-Grade
Glioma Using Quantum Annealing for Feature Selection.” Journal of
Digital Imaging, 36(6), https://doi.org/10.1007/s10278-023-00886-x.
41 Konar, D., Bhattacharyya, S., Panigrahi, B. K., & Behrman, E. C.
(2022). “Qutrit-Inspired Fully Self-Supervised Shallow Quantum Learning
Network for Brain Tumor Segmentation.” IEEE Transactions on Neural
Networks and Learning Systems, 33(11), https://doi.org/10.1109/TNNLS.
2021.3077188.
42 Cheng, Jun. “Brain Tumor Dataset”. Figshare. Dataset. (2017), https://
doi.org/10.6084/m9.figshare.1512427.v5.
43 Cheng, Jun, et al. “Enhanced Performance of Brain Tumor Classification
via Tumor Region Augmentation and Partition”. PLOS One, 10(10) (2015), https://doi.org/10.1371/journal.pone.0140381.
44 Cheng, Jun, et al. “Retrieval of Brain Tumors by Adaptive Spatial Pooling
and Fisher Vector Representation”. PLOS One, 11(6) (2016), https://doi.org/10.1371/journal.pone.0157112.
45 Bergholm, Ville, et al. “Pennylane: Automatic differentiation of hybrid
quantum-classical computations.” arXiv, (2018), https://arxiv.org/abs/1811.
04968.