Brief introduction to the project
The aim of this page is to present information concerning the 2-year (Apr. 2022 - Mar. 2024) funded project
entitled "Deep Learning and its Applications in
Big Data Networks Performance Improvement", under the 3rd Call of
PhD Fellowships (Fellowship Number: 5631), supported by the Hellenic Foundation for Research and Innovation
(HFRI).
Fragkou Evangelia,
a member of the DANA Lab,
working under the supervision of Associate Professor
Dr. Dimitrios Katsaros,
has earned this grant in order to carry out her PhD studies. Specifically, this page presents
the articles accepted in journals and conferences, which are also indexed in various bibliographic
databases:
Google Scholar,
DBLP,
ACM,
Scopus.
Articles in journals
[J2] Fragkou E., Koultouki M., Katsaros D., "Model Reduction of Feed Forward Neural
Networks for Resource-Constrained Devices",
Applied Intelligence (Springer), 2022, https://doi.org/10.1007/s10489-022-04195-8.
[J1] Fragkou E., Papakostas D., Kasidakis T., Katsaros D.,
"Multilayer Backbones for Internet of Battlefield Things",
Future Internet (MDPI), vol. 14, no. 6, art. 186, 2022.
Articles in conferences
[C3] Fragkou E., Katsaros D., "Transfer Deep Learning for
TinyML", Poster, 5th Summit on Gender Equality in Computing (GEC'23), Athens, June 27, 2023.
[C2] Fragkou E., Lygnos V., Katsaros D., "Transfer Learning for Convolutional Neural Networks
in Tiny Deep Learning Environments", PCI 2022, DOI: 10.1145/3575879.3575984.
[C1] Fragkou E., Katsaros D., "Memory Reduction and Training Acceleration of Neural Networks
for Tiny Machine Learning", Poster, 4th Summit on Gender Equality in Computing (GEC'22),
Hybrid Event, June 16-17, 2022.
Main goals
- Neural model reduction.
The aim is to sparsify a deep neural network during the training phase by
pruning connections between neurons that do not contribute to the training.
In particular, we employ specific rule-oriented concepts developed in the realm of network science
(our motivation rests on observations that the actual topology of real neural
networks is scale-free, or small-world), in order to sparsify the neural network while keeping the
most significant links, which are responsible for better
information distribution in the network. Thus, we drastically reduce both the number
of trainable variables and the size of the model, which leads to training acceleration
and makes our algorithms applicable to resource-constrained (TinyML) devices.
(A minimal sketch of this idea is given after this list.)
- Federated learning over ad hoc networks.
The aim is to drastically reduce communication in a
purely distributed ad hoc network by allowing only a percentage
of the network's nodes (not every node) to cooperatively
train a global model using Federated Learning techniques. We therefore focus on
designing smart node-participation protocols (for instance, based on data-similarity
criteria) in order to exclude specific nodes from the training procedure and
thus reduce the data that needs to be transmitted among the nodes of the network.
(See the second sketch after this list.)
- Deep learning in caching and in distributed indices.
Published papers in the literature have so far focused on learned indexes and
generalizations of B-trees (e.g., the R-tree) for centralized data.
It would be of great interest to study and develop deep-learning-based
techniques with the aim of finding/matching distributed data.
(A toy learned-index sketch is also given after this list.)
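The following is a minimal sketch of the sparsification idea behind the first goal. It is not the published algorithm: in place of the project's network-science-based link-selection rule it uses simple magnitude-based pruning during training, and the model sizes, pruning fraction, and helper names (SparseMLP, prune, train_epoch) are illustrative assumptions.

```python
# Illustrative sketch only: magnitude-based pruning stands in for the project's
# network-science-based rule for keeping the most significant connections.
import torch
import torch.nn as nn

class SparseMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # Binary masks marking which connections are kept (1) or cut (0).
        self.masks = {name: torch.ones_like(p)
                      for name, p in self.named_parameters() if p.dim() == 2}

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

    def prune(self, fraction=0.2):
        """Cut the weakest `fraction` of the still-active connections per layer."""
        for name, p in self.named_parameters():
            if name not in self.masks:
                continue
            mask = self.masks[name]
            active = p.data[mask.bool()].abs()
            if active.numel() == 0:
                continue
            threshold = torch.quantile(active, fraction)
            mask *= (p.data.abs() > threshold).float()
            p.data *= mask                      # zero the cut connections

def train_epoch(model, loader, opt, loss_fn):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        # Freeze gradients of cut connections so they stay at zero.
        for name, p in model.named_parameters():
            if name in model.masks:
                p.grad *= model.masks[name]
        opt.step()
    model.prune(fraction=0.2)                   # sparsify a bit more every epoch

# Toy usage with random data standing in for a real dataset.
model = SparseMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(5)]
for _ in range(3):
    train_epoch(model, data, opt, nn.CrossEntropyLoss())
kept = sum(int(m.sum()) for m in model.masks.values())
total = sum(m.numel() for m in model.masks.values())
print(f"active connections: {kept}/{total}")
```

The masks shrink the number of effective trainable variables; in a real TinyML setting the pruned model would additionally be stored in a sparse format to reduce its memory footprint.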
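A second sketch illustrates the node-participation idea of the second goal. The selection rule used here (cosine similarity between each node's local label histogram and the global label distribution) and the function names (select_nodes, federated_round) are assumptions made for illustration, not the project's actual protocol.

```python
# Hedged sketch: FedAvg-style aggregation where only a subset of nodes participates,
# chosen by a hypothetical data-similarity criterion over local label histograms.
import numpy as np

def select_nodes(label_hists, budget=0.5):
    """Pick the nodes whose local label distribution is closest (cosine
    similarity) to the global one; only these transmit their models."""
    hists = np.asarray(label_hists, dtype=float)
    global_dist = hists.sum(axis=0)
    global_dist /= global_dist.sum()
    local = hists / hists.sum(axis=1, keepdims=True)
    sims = (local @ global_dist) / (np.linalg.norm(local, axis=1)
                                    * np.linalg.norm(global_dist))
    k = max(1, int(budget * len(hists)))
    return np.argsort(sims)[-k:]            # indices of the k most similar nodes

def federated_round(local_weights, sample_counts, selected):
    """Average only the selected nodes' models, weighted by their data volume."""
    total = sum(sample_counts[i] for i in selected)
    return sum(local_weights[i] * (sample_counts[i] / total) for i in selected)

# Toy usage: 6 nodes, 4 label classes, flattened model weights of size 10.
rng = np.random.default_rng(0)
hists = rng.integers(1, 50, size=(6, 4))
weights = [rng.normal(size=10) for _ in range(6)]
counts = hists.sum(axis=1)
chosen = select_nodes(hists, budget=0.5)
global_model = federated_round(weights, counts, chosen)
print("participating nodes:", chosen)
```

Because only a fraction of the nodes transmit their model updates in each round, the communication volume over the ad hoc network drops roughly proportionally.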
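Finally, to make the learned-index notion concrete for the centralized setting that the existing literature covers, here is a toy sketch: a linear model predicts a key's position in a sorted array, and a bounded local search corrects the prediction error. It is purely illustrative and does not represent any distributed technique developed in the project.

```python
# Toy learned index over a sorted key array: model prediction + error-bounded search.
import numpy as np

class LinearLearnedIndex:
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        pos = np.arange(len(self.keys))
        # Least-squares fit: position ~ slope * key + intercept.
        self.slope, self.intercept = np.polyfit(self.keys, pos, deg=1)
        pred = np.clip(self.slope * self.keys + self.intercept, 0, len(self.keys) - 1)
        # Worst-case prediction error (+1 margin for integer truncation).
        self.max_err = int(np.ceil(np.abs(pred - pos).max())) + 1

    def lookup(self, key):
        guess = int(np.clip(self.slope * key + self.intercept, 0, len(self.keys) - 1))
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # Search only inside the error-bounded window around the prediction.
        i = lo + np.searchsorted(self.keys[lo:hi], key)
        return i if i < len(self.keys) and self.keys[i] == key else None

# Toy usage: index 1,000 random integer keys and look one of them up.
idx = LinearLearnedIndex(np.random.default_rng(1).integers(0, 10_000, size=1_000))
print(idx.lookup(idx.keys[42]))             # prints the position of that key
```

Extending such structures beyond the centralized case, i.e., towards locating data that is spread over many nodes, is the kind of question this goal targets.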