As a member of the Artificial Intelligence research group at the University of Essex, I organise a Computer Vision seminar series throughout the semester. We welcome researchers of all kinds (PhD students, professors, researchers in industry labs, etc.) to visit our group and present their work at the seminar.

This page maintains information about past and upcoming talks in the biweekly seminar series, with links to relevant presentations, papers and other resources.

If you want to be included in the Computer Vision mailing list, please contact me.

14 June 2023

Slides: Coming soon

Title: Advanced Computer Vision Techniques for Healthcare Applications

Abstract: The early detection of disease progression is a critical task that can give people the advantage of early knowledge and intervention, helping them improve their health status and possibly prevent long-term complications, even death. Accurate detection of disease can also significantly reduce national healthcare expenditure, particularly in the area of skin cancer and its complications, where waiting times to see a doctor in the UK are very long. Advances in computer vision allow us to apply such techniques to a range of healthcare applications. In this talk, I will present some of my work applying computer vision techniques, more specifically machine learning and deep learning models, to healthcare applications.

Bio: Dr Shafiqul Islam is currently working as a Knowledge Transfer Partnership (KTP) associate on the Skin AI Model project, a collaboration between Check4Cancer Ltd and the University of Essex, UK. He was previously a postdoctoral research associate at the University of Glasgow, UK, on a project developing a deep learning framework to prevent thermal imaging attacks on user interfaces. He did his PhD in Computer Science and Engineering at Hamad Bin Khalifa University, Qatar; his dissertation was on the application of advanced machine learning techniques for diabetes management and prediction. During his MSc studies at the American University of Beirut, Lebanon, he worked on artifact removal from EEG signals for ambulatory epileptic seizure prediction applications. His work has been published in IEEE Sensors and biomedical signal processing journals. The majority of his work involves collecting data (EEG, CGM, EHR, and images), pre-processing, feature engineering, feature selection and fusion, and developing and optimizing machine and deep learning models.

31 May 2023

Slides: Coming soon

Title: Data-Driven AI-based Digital Reconstruction of Human Tissues for Synthetic Biology and Computational Pharmaceutics

Abstract: In this talk we will explore the fundamental steps towards realistic digital representations of human tissues. We will show how the structure and activity of human tissues can be broken into separate machine learning problems, and present methods for extracting computational solutions from experimental data. We show results on problems such as the estimation of health and disease states in tissue, the computational design of drug delivery systems, and novel biocomputing solutions, and present a few use cases in the areas of Synthetic Biology and Computational Pharmaceutics.

Bio: Dr Barros has been an Assistant Professor (Lecturer) in the School of Computer Science and Electronic Engineering at the University of Essex, UK, since June 2020. He received his PhD in Computer Science from the South East Technological University, Ireland, in 2016. He previously held academic positions with prestigious grants at Tampere University, Finland (MSCA-IF) and the Waterford Institute of Technology, Ireland (IRC GOI Postdoc). He is the head of the Unconventional Communications and Computing Laboratory, which is part of the Communications and Networks Research Group.

He has over 80 peer-reviewed scientific publications in top journals and conferences, such as Nature Scientific Reports, IEEE Transactions on Communications and IEEE Transactions on Vehicular Technology, in the areas of molecular and unconventional communications, biomedical engineering, bionano science and Beyond 5G. Since 2020 he has been a review editor for the Frontiers in Communications and Networks journal in the area of unconventional communications. He has also served as guest editor for the IEEE Transactions on Molecular, Biological and Multi-Scale Communications and Digital Communications and Networks journals. He received the CONNECT Prof. Tom Brazil Excellence in Research Award in 2020.

10 May 2023

Slides: Coming soon

Title: Topics in Contextualised Attention Embeddings
Abstract: Contextualised word vectors obtained via pre-trained language models encode a variety of knowledge that has already been exploited in applications. Complementary to these language models are probabilistic topic models that learn thematic patterns from text. Recent work has demonstrated that clustering the word-level contextual representations from a language model emulates the word clusters discovered in latent topics of words by Latent Dirichlet Allocation. The important question is how such topical word clusters are automatically formed, through clustering, in the language model when it has not been explicitly designed to model latent topics. To address this question, we design different probe experiments. Using BERT and DistilBERT, we find that the attention framework plays a key role in modelling such word topic clusters. We strongly believe that our work paves the way for further research into the relationships between probabilistic topic models and pre-trained language models.

 

Bio: Mozhgan Talebpour is a Ph.D. student in Computer Science and Artificial Intelligence in the School of Computer Science and Electronic Engineering, University of Essex. Her main experience is in effectively and efficiently processing the various modalities present in social media, mainly text, images and videos, and categorizing them into predefined groups; notably, the textual input data can be in different languages. She has used a wide range of methods and algorithms, including classification methods (SVM, decision trees, random forests, Naïve Bayes, etc.), regression methods (linear and nonlinear), clustering methods (covering categorical and numerical data) and neural network algorithms (RNN, CNN, Bi-LSTM), to extract information from large datasets and to build AI products. She has also found Skip-Gram, GloVe, BERT, GPT and ELMo useful in the projects she has conducted so far.

26 April 2023


Title: Combining Visual Place Recognition Models to Compensate Standalone Shortcomings by Bruno Arcanjo

Abstract: Visual place recognition (VPR) refers to the ability of a system to recognise a previously seen place using image information, and is heavily influenced by both the computer vision and robotics research communities. VPR presents an array of challenges in the visual and hardware departments, often with a trade-off between performance and efficiency. Many VPR techniques have therefore been proposed, each with its own set of strengths and weaknesses, and work on combining techniques has followed suit to exploit this observation.

 

In this talk, Bruno Arcanjo will cover his recent work on combining VPR techniques with a focus on computational efficiency while preserving performance. First, he explains how he used several lightweight, bio-inspired models to achieve respectable performance at extremely low computational cost. Then, he presents his work on combining existing VPR techniques in an environment-adaptive fashion, avoiding unnecessary computation.
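
As an illustration only, here is a minimal sketch of how environment-adaptive selection between VPR techniques might look; the running quality estimate, the update rule and the stand-in matchers are all assumptions made for the sketch, not the speaker's algorithm.

```python
import numpy as np

# Hedged sketch (not the speaker's method): keep a running quality estimate
# per VPR technique and query only the currently best one, updating the
# estimate from the match confidence it reports.
quality = {"tech_a": 0.5, "tech_b": 0.5}        # running confidence estimates

def select_and_match(query, techniques, alpha=0.1):
    name = max(quality, key=quality.get)         # run only the trusted technique
    match_id, confidence = techniques[name](query)
    quality[name] = (1 - alpha) * quality[name] + alpha * confidence
    return name, match_id

# stand-in techniques: cosine nearest neighbour in two random reference maps
refs = {"tech_a": np.random.rand(50, 16), "tech_b": np.random.rand(50, 16)}
techniques = {}
for name, ref in refs.items():
    def match(q, ref=ref):
        sims = ref @ q / (np.linalg.norm(ref, axis=1) * np.linalg.norm(q))
        return int(sims.argmax()), float(sims.max())
    techniques[name] = match

print(select_and_match(np.random.rand(16), techniques))
```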


Bio: Bruno Arcanjo is a Computer Science PhD student at the University of Essex under the supervision of Dr Shoaib Ehsan and Prof Klaus McDonald-Maier. His research topic is visual place recognition, with a special interest in bio-inspired, efficient algorithms as well as technique-fusion approaches. Other interests include deep-learning vision models and generative AI.

07 December 2022


Title: Robust Perceptual Grouping Using Tensor Voting: From Numerical Approximation to Analytical Solution by Hongbin Lin

Abstract: Tensor voting has become one of the most popular perceptual grouping techniques due to its powerful saliency structure inference capability, and it has been successfully adapted, with excellent results, to problems well beyond those to which it was originally applied. Despite its effectiveness, tensor voting cannot be used in applications where efficiency is an issue, mainly due to the high computational cost of its classical implementation, especially for tensor voting in higher-dimensional spaces. Moreover, the absence of an analytical solution to tensor voting has restricted investigation into the potential properties of the mechanism, preventing its further integration with other techniques, e.g. learning-based methods. This talk discusses a novel Analytical Tensor Voting (ATV) mechanism, which enables robust perceptual grouping and salient information extraction from noisy N-dimensional (ND) data.

Bio: Dr Hongbin Lin is currently an associate professor at the School of Electrical Engineering, Yanshan University, China. He is also a Research Fellow working with Professor Dongbing Gu at the School of Computer Science and Electronic Engineering, University of Essex. He received his Ph.D. in Electronics Science and Technology from Yanshan University in 2012, and majored in Measurement Technology and Automation Devices for his M.Sc. degree, also from Yanshan University, China. His Ph.D. focused on the meshless processing of point cloud data and its application to feature extraction and parameter estimation for large forging workpieces. In recent years, Hongbin has published more than 10 journal papers in the fields of point cloud processing, scene understanding and manifold learning, in IEEE Transactions on Pattern Analysis and Machine Intelligence, IEICE Transactions on Information and Systems, Acta Automatica Sinica, etc. He has also led a series of funded projects, including 1 National Natural Science Foundation of China project, 3 Natural Science Foundation of Hebei Province projects and 1 Key Foundation of Hebei Educational Committee project.

23 November 2022

Slides: Coming soon

Title: Deep Learning for Automated Neurodegenerative Disease Diagnosis by Ekin Yagis


Abstract: Automated disease classification systems can assist radiologists by reducing workload while initiating therapy to slow disease progression and improve patients’ quality of life. With significant advances in machine learning (ML) and medical scanning over the last decade, medical image analysis has experienced a paradigm change. Deep learning (DL) employing magnetic resonance imaging (MRI) has become a prominent method for computer-assisted systems because of its ability to extract high-level features via local connectivity, weight sharing and spatial invariance. Nonetheless, several important research challenges remain on the way to clinical application, and these problems inspire the contributions presented throughout this thesis. This research develops a framework for the classification of neurodegenerative diseases using DL techniques and MRI.

Bio: Dr Ekin Yagis is a Post-Doctoral Research Fellow at the UCL Centre for Advanced Biomedical Imaging and the UCL Department of Mechanical Engineering. She has a Ph.D. in Computer Science and Electrical Engineering from the University of Essex. She majored in Electrical Engineering at Koc University, Turkey, and holds an M.Sc. degree from Sabanci University, Turkey. Her Ph.D. focused on the early prediction of neurodegenerative diseases such as Alzheimer’s and Parkinson’s using machine learning.

In 2022 Ekin joined the interdisciplinary team headed by Professor Peter Lee, which utilised the European Synchrotron Radiation Facility in Grenoble to image organs. She is currently working on the development of new machine learning-based image processing pipelines to gather information that uncovers the mechanisms of neurological diseases.


9 November 2022


Title: Development in Audio Forensics and Current Challenges by Zulfiqar Ali


Abstract: With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications are emerging as preventive and detective controls in real-world circumstances, such as forged evidence, breach of copyright protection and unauthorized data access. Copy-move and splicing forgery can both be used to tamper with audio. Although copy-move forgery is one of the most common fabrication techniques, blind detection of such tampering in digital audio is largely unexplored. Unlike active techniques, blind forgery detection is challenging because it cannot rely on a watermark or signature embedded in the audio. Splicing forgery, on the other hand, may involve merging recordings from different devices, speakers and environments. Dr Zulfiqar Ali will provide a comprehensive overview of audio forgery, including copy-move and splicing, and will discuss intelligent systems for detecting such tampering and their limitations.

 

Bio: Dr Zulfiqar Ali is a lecturer at the School of Computer Science and Electronic Engineering (CSEE), University of Essex. His main focus is on AI for decision-making and the development of speech-related applications. He was part of the BT Ireland Innovation Centre during 2018-2020, where he supervised various projects related to predictive analytics. During 2010-2018, he was a member of the Digital Speech Processing Group at King Saud University and the Centre of Intelligent Signal and Imaging Research at Universiti Teknologi PETRONAS. He played a vital role in the design and development of various speech databases, some of which are available through the Linguistic Data Consortium (LDC). He is currently establishing a Digital Speech Processing Laboratory in CSEE with a multichannel recording facility. He has published extensively in the domains of speech/speaker recognition, vocal fold disorder detection, security and privacy in healthcare applications, and multimedia forensics.


25 May 2022


Title: Latent Topics in Contextualised Attention Embeddings by Mozhgan Talebpour


Abstract: Word vectors obtained via pre-trained language models encode a variety of knowledge that has already been exploited in the research literature. Complementary to these language models are probabilistic topic models that learn thematic patterns from text in a low-dimensional representation space. Recent work has demonstrated that clustering the word-level representations obtained from a language model emulates what is usually discovered in latent topics of words, i.e., the word topics obtained after factoring the co-occurrence matrix with a topic model such as Latent Dirichlet Allocation (LDA). An interesting question is how such topical knowledge is automatically encoded by the language model when it has not been explicitly designed to model latent topics. To address this question, we design different probe experiments. Using BERT and DistilBERT, two popular pre-trained language models, we find that the attention framework plays a key role in modelling clusters of words that resemble latent topics. We strongly believe that our work paves the way for further research into the relationships between probabilistic topic models and pre-trained language models, including developing a thorough theoretical understanding of their relationship.
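
As a rough illustration of this kind of probing setup, the sketch below clusters contextualised word vectors from DistilBERT with k-means and prints the word-cluster assignments; the model name, toy sentences and cluster count are arbitrary choices for the example, and the snippet downloads the pre-trained model on first run.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

# Hedged sketch of the probe: collect contextual word vectors from a
# pre-trained model, cluster them, and eyeball whether clusters look topical.
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.eval()

docs = ["the bank approved the loan", "the river bank was muddy"]
words, vecs = [], []
for doc in docs:
    enc = tok(doc, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]       # (seq_len, 768)
    for t, v in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), hidden):
        if t not in ("[CLS]", "[SEP]"):
            words.append(t)
            vecs.append(v.numpy())

labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(vecs))
print(list(zip(words, labels)))                          # word-cluster pairs
```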

 

Bio: Mozhgan Talebpour is a KTP associate working on text classification. She is currently a Ph.D. student in Computer Science and Artificial Intelligence in the School of Computer Science and Electronic Engineering, University of Essex. Her main experience is in effectively and efficiently processing the various modalities present in social media, mainly text, images and videos, and categorizing them into predefined groups; notably, the textual input data can be in different languages. She has used a wide range of methods and algorithms, including classification methods (SVM, decision trees, random forests, Naïve Bayes, etc.), regression methods (linear and nonlinear), clustering methods (covering categorical and numerical data) and neural network algorithms (RNN, CNN, Bi-LSTM), to extract information from large datasets and to build AI products. She has also found Skip-Gram, GloVe, BERT, GPT and ELMo useful in the projects she has conducted so far.


11 May 2022


Title: Use of Generative Adversarial Networks for the creation and manipulation of facial images in the context of studying false memories and their effects on wrongful conviction cases by Rodrigo Ramele

Abstract: This project is being developed in the context of the study of false memory formation in human beings, its implications for eyewitness reporting in court trials, and how this can lead to wrongful convictions. It is sponsored by the NGO Innocence Project, which works precisely to overturn such invalid sentences, particularly when there is external evidence that the eyewitness procedure could have been actively manipulated. In particular, we implement a face generation model using a Generative Adversarial Network (GAN), with the aim of generating faces as realistic as possible, so that a human cannot distinguish them from real faces. We describe why StyleGAN, a particular implementation of the GAN framework, was the chosen architecture: in addition to producing high-resolution images, it provides a model that allows navigation of the latent space and the synthesis of faces using style-mixing properties.
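
To make the latent-space navigation concrete, here is a minimal sketch of interpolating between two latent codes; the stand-in generator below is a random untrained network, not StyleGAN, so it only illustrates the mechanics of the walk.

```python
import torch

# Hedged sketch: a real system would use a pre-trained StyleGAN generator;
# this random network G stands in so the snippet runs on its own.
G = torch.nn.Sequential(torch.nn.Linear(512, 1024), torch.nn.Tanh(),
                        torch.nn.Linear(1024, 3 * 64 * 64), torch.nn.Tanh())
z1, z2 = torch.randn(1, 512), torch.randn(1, 512)   # two latent codes
frames = []
for a in torch.linspace(0, 1, 8):                   # walk between the codes
    z = (1 - a) * z1 + a * z2
    with torch.no_grad():
        frames.append(G(z).reshape(3, 64, 64))      # one "face" per step
```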

Bio: Dr Rodrigo Ramele is a Senior Research Officer at the Brain-Computer Interfaces and Neural Engineering (BCI-NE) lab at the University of Essex, working on a US-UK Bilateral Academic Research Initiative (BARI) project led by Prof. Riccardo Poli, looking at human-AI collaborative decision-making. Previously, he worked on Computer Vision, Artificial Intelligence, Assistive Robotics and BCI in Argentina and Japan.



30 March 2022

Title: Deep Learning in Target Space by Michael Fairbank 


Abstract: Dr Michael Fairbank will give a brief, high-level explanation of the main concept of the paper "Deep Learning in Target Space", developed at the University of Essex by Dr Michael Fairbank, Dr Spyros Samothrakis and Prof Luca Citi, and published in JMLR in 2022: https://www.jmlr.org/papers/v23/20-040.html. The idea of this method is to train neural networks by doing gradient descent with respect to the values of the activations of the hidden nodes in the network, as opposed to the usual gradient descent with respect to the weights of the network. Hence this is a transformation of the usual search space used in deep learning, from "weight space" to "target space". The authors argue that this method leads to an effect they call cascade untangling, which stabilises learning, enabling the training of deeper neural networks, improving generalisation and addressing the notorious "exploding gradients" problem. It is particularly effective in the "deepest" kind of neural networks, i.e. recurrent neural networks.
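
A heavily simplified sketch of the target-space idea on a one-hidden-layer regression problem is given below; the paper's actual algorithm differs in important details (e.g. how targets are defined and how weights are re-derived), so treat this as a toy illustration only.

```python
import torch

# Heavily simplified sketch (not the paper's algorithm): the trainable
# parameters are desired hidden activations ("targets"); weights realising
# them are re-derived by least squares (pseudo-inverse) at every step, and
# gradient descent runs on the targets instead of the weights.
X = torch.randn(64, 10)                          # toy inputs
Y = torch.randn(64, 1)                           # toy regression labels
T = torch.randn(64, 32, requires_grad=True)      # hidden-layer targets

opt = torch.optim.Adam([T], lr=1e-2)
for step in range(200):
    W1 = torch.linalg.pinv(X) @ T                # weights that best hit T
    H = torch.tanh(X @ W1)                       # realised hidden activations
    W2 = torch.linalg.pinv(H.detach()) @ Y       # output layer by least squares
    loss = ((H @ W2 - Y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```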

Bio: Dr Michael Fairbank is a Computer Science lecturer at the University of Essex. His main research area is developing and applying learning algorithms for neural networks and reinforcement learning.  In his previous careers he worked as a computer consultant and as a mathematics teacher. He has a passion for all things related to computing, mathematics and AI.


16 March 2022

Slides: Click here to download presentation


Title: Modelling Group Dynamics with SYMLOG and Snowdrift for Intelligent Classroom Environment by Dr Edward Longford

 

Abstract: The research conducted during my thesis aimed to provide assistance to human teachers, focusing on supporting group work within a classroom environment. This is achieved by incorporating theories from psychology and game theory to provide a better method of modelling and predicting group interactions. The research proposes a framework that extends the pre-existing Intelligent Tutoring System (ITS) to a group-based Intelligent Classroom Tutoring System (ICTS). Five successful experiments were conducted (alongside one miserable failure) to support the ICTS: two experiments tested a new mod-SYMLOG framework for modelling group interactions, and three experiments, comprising both AI and human studies, examined a new mod-Snowdrift game to produce a predictive mechanism for group interaction.

 

Bio: Dr (the ink is still wet) Ed Longford is a PhD graduate in Computer Science and Electronic Systems from the University of Essex and is currently a lecturer in the Creative Computing department at Bath Spa University. After failing to get a proper job on receiving an undergraduate degree in International Relations and Politics from the University of Plymouth in 2007, Ed took the Computer Science conversion MSc then offered at the University of Essex and worked as a DBA for 7 years. Returning to Essex in 2017 for his PhD studies, glutton for punishment that he is, Ed GLAed for around a dozen different undergraduate and masters modules before graduating, though he is probably best known for occasional baking. Ed now spends his time trying to teach students about databases, data science, critical thinking, and science fiction writing.



16 February 2022

Title: One-Shot Only Real-Time Video Classification: A Case Study in Facial Emotion Recognition by Dr Arwa M.A. Basbrain

 

Abstract: Video classification is an important research field due to its applications, ranging from human action recognition for video surveillance to emotion recognition for human-computer interaction. This paper proposes a new method called One-Shot Only (OSO) for real-time video classification, with a case study in facial emotion recognition. Instead of using 3D convolutional neural networks (CNNs) or multiple 2D CNNs with decision fusion as in previous studies, the OSO method tackles video classification as a single image classification problem by spatially rearranging video frames, using frame selection or clustering strategies, to form a simple representative storyboard for spatio-temporal video information fusion. It uses a single 2D CNN for video classification and thus can be optimised end-to-end directly in terms of classification accuracy. Experimental results show that the OSO method outperformed multiple 2D CNNs with decision fusion by a large margin in terms of classification accuracy (by up to 13%) on the AFEW 7.0 dataset for video classification. It is also very fast, up to ten times faster than the commonly used 2D CNN architectures for video classification.
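
A minimal sketch of the storyboard idea might look as follows, with evenly spaced frame selection standing in for the paper's selection/clustering strategies and an off-the-shelf ResNet-18 standing in for the classifier; the frame count, grid size and class count are made up for the example.

```python
import torch
import torchvision

def storyboard(frames, grid=2):
    """Hedged sketch: pick grid*grid frames evenly and tile them into one image."""
    idx = torch.linspace(0, len(frames) - 1, grid * grid).long()
    picked = [frames[i] for i in idx]                       # each: (3, H, W)
    rows = [torch.cat(picked[r * grid:(r + 1) * grid], dim=2) for r in range(grid)]
    return torch.cat(rows, dim=1)                           # (3, grid*H, grid*W)

frames = [torch.rand(3, 112, 112) for _ in range(40)]       # stand-in video
board = storyboard(frames)                                  # (3, 224, 224)
model = torchvision.models.resnet18(num_classes=7)          # 7 emotion classes
logits = model(board.unsqueeze(0))                          # one-shot prediction
```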

 

Bio: Dr Arwa Basbrain obtained her B.Sc. degree in Computer Science from King Abdul Aziz University, Saudi Arabia. After a period of teaching and research at the university, she obtained her M.Sc. degree in natural language processing and Artificial Intelligence from King Abdul Aziz University. She received her PhD in video classification from the University of Essex, United Kingdom, in 2022.



02 February 2022

Title: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dr Yashin Dicente Cid

 

Abstract: Dr Yashin Dicente Cid will walk us through the paper “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” by Dosovitskiy et al., 2020. You can find the full paper here: https://arxiv.org/abs/2010.11929

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
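
The patch-embedding front end that the title alludes to can be sketched in a few lines; the dimensions below follow the ViT-Base configuration (16x16 patches, width 768), but the snippet omits position embeddings and the transformer encoder itself.

```python
import torch
import torch.nn as nn

# Minimal sketch of the ViT front end: split an image into 16x16 patches,
# flatten each patch, and project it to the transformer width.
img = torch.rand(1, 3, 224, 224)                  # one input image
patch, width = 16, 768
unfold = nn.Unfold(kernel_size=patch, stride=patch)
tokens = unfold(img).transpose(1, 2)              # (1, 196, 3*16*16) flat patches
embed = nn.Linear(3 * patch * patch, width)
x = embed(tokens)                                 # (1, 196, 768) patch embeddings
cls = torch.zeros(1, 1, width)                    # learnable [class] token in ViT
x = torch.cat([cls, x], dim=1)                    # (1, 197, 768) -> transformer
```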

 

Bio: Dr Yashin Dicente Cid obtained his B.Sc. degree in Mathematics from the University of Barcelona, Spain, in 2008. After a period working in the private sector, he obtained his M.Sc. degree in Computer Vision and Artificial Intelligence from the Autonomous University of Barcelona, Spain, in 2012. He obtained his Ph.D. in medical imaging from the University of Geneva, Switzerland, in 2018. During his Ph.D. he worked as a research assistant in the MedGIFT group at the University of Applied Sciences Western Switzerland, Sierre (HES-SO), where he continued as a postdoctoral researcher until 2019. Since then, he has worked as a research fellow in the Data Science group at the University of Warwick, UK.

01 December 2021

Slides: Click here to download presentation

Title: Generative Adversarial Networks: a review of their applications By Matteo Lai

Abstract: Generative Adversarial Networks (GANs) are a class of machine learning frameworks made up of two neural networks that play a zero-sum game, pitting one against the other, in order to generate synthetic instances of data. We will discuss the evolution of GANs from their introduction by Goodfellow et al. in 2014 to date, analysing different applications found in the literature.
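
As a refresher on the zero-sum game itself, here is a minimal sketch of the original GAN training loop on toy 1-D data; the architectures and hyperparameters are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

# Minimal sketch of the adversarial game: D learns to tell real from fake,
# G learns to fool D, on toy 1-D data drawn from N(2, 0.5).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0                  # "real" samples
    fake = G(torch.randn(64, 8))                           # generated samples
    # discriminator step: real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: make D output 1 on fakes
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```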

Bio: Matteo Lai is a Master’s student in Biomedical Engineering at the University of Bologna. He majored in Biomedical Engineering at the University of Cagliari. He is currently doing his Master’s thesis at the University of Essex, under the supervision of Professor Luca Citi, studying the application of GANs in medical imaging.

03 November 2021

Slides: Click here to download presentation

Title: Topotecan Penetration Analysis in Retinoblastoma cell cultures By Dr Rodrigo Ramele

Abstract: Retinoblastoma is a common intraocular tumor of childhood. One of the medications used as an antineoplastic agent for retinoblastoma treatment is topotecan, whose penetration into living tumorspheres is quantified using confocal microscopy. Topotecan is a fluorescent drug that dyes the living tissue, which is then recorded in a sequence of images over a period of time. The effective penetration of the drug depends on culture characteristics and requires very specific timing, which is calculated empirically by an expert. The purpose of this work is to offer a model to automatically estimate and evaluate the dynamics of topotecan penetration in cells, based on the information obtained from a sequence of tumorsphere images.

Bio: https://www.essex.ac.uk/people/ramel02906/rodrigo-ramele

20 October 2021

Slides: Click here to download presentation

Title: Deep Neural Network Architectures for Automated Diagnosis of Pulmonary Tuberculosis using Chest X-ray By Ekin Yagis

Abstract: Tuberculosis (TB) is still a serious public health concern across the world, causing 1.4 million deaths each year. There is a scarcity of radiological interpretation skills in many TB-affected locations, which causes interpretation delays, poor diagnosis rates and poor patient outcomes. A cost-effective and efficient automated technique could support screening evaluations in poor countries and provide early diagnosis. In this work, two deep learning-based methods for the automated diagnosis of TB are proposed. The models have been tested on two publicly available datasets and one private dataset. Both proposed systems have shown high accuracy and specificity in detecting chest radiographs with active pulmonary TB in a multi-national patient cohort. The best performing model, a multi-scale residual neural network with deep layer aggregation, accurately classified TB using chest radiography with an AUC of 0.98. The methodology and findings point to a viable route for more accurate and quicker TB detection, especially in low- and middle-income nations.

Bio: Ekin Yagis is a Ph.D. student in Computer Science and Electrical Engineering at the University of Essex. She majored in Electrical Engineering at Koc University and holds an M.Sc. degree from Sabanci University, Istanbul. She works as a research officer in CSEE under the supervision of Dr Alba Garcia and Dr Vahid Abolghasemi. Her research interests include medical image processing, machine learning and computer vision. She is currently focusing on the detection of neurodegenerative diseases such as Parkinson’s and Alzheimer’s using machine learning.

6 October 2021

Slides: Click here to download presentation

Title: Beyond Validation: Characterising Modes of Segmentation Failure – Part 2. By Tasos Papastylianou

Abstract: Validation typically answers the question “to what extent does a segmentation algorithm work?”. However, this fails to address the equally interesting questions of why, when, or how an algorithm succeeds or fails. In many clinical applications, these are non-trivial questions, since some failures may be more clinically relevant than others. We demonstrate how modes of segmentation failure can be investigated in an anatomically meaningful manner with the use of appropriate fuzzy maps / masks, which can be straightforwardly constructed to express meaningful anatomical relationships between a probabilistic segmentation object and its ground truth. This allows us to ask questions like “how much of the segmentation’s failure occurs near anatomical landmark X”, or “at an approximate distance Y from X”, “in the general direction of Z”, “twice as bad near X as near Z”, “how much of the failed parts are due to the presence of a particular anatomical substructure or anatomical artifact”, etc, thereby providing an extra layer of explainability to the validation process.
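
One way to picture the fuzzy-map idea is the toy sketch below, which weights the disagreement between a probabilistic segmentation and its ground truth by a "near landmark X" map; the Gaussian map and the random arrays are stand-ins for real anatomy, not the talk's actual construction.

```python
import numpy as np

# Hedged sketch: weight segmentation disagreement by a fuzzy anatomical map,
# so that failure "near landmark X" can be quantified separately.
def weighted_failure(pred, truth, fuzzy_map):
    """pred, truth: probabilistic segmentations in [0,1]; fuzzy_map: anatomical weights."""
    disagreement = np.abs(pred - truth)
    return (disagreement * fuzzy_map).sum() / fuzzy_map.sum()

# toy example: a fuzzy "near landmark" map that decays with distance
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
near_landmark = np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / 200.0)
pred = np.random.rand(h, w)
truth = (np.random.rand(h, w) > 0.5).astype(float)
print(weighted_failure(pred, truth, near_landmark))
```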

Bio:  Dr Tasos Papastylianou is a Senior Research Officer at the Brain-Computer Interfaces and Neural Engineering (BCI-NE) lab at the University of Essex, working on a US-UK Bilateral Academic Research Initiative (BARI) project led by Prof. Riccardo Poli, looking at human-AI collaborative decision-making. Prior to this, he worked as a Machine Learning and Biomedical Signal Processing researcher for the Nevermind Project (http://nevermindproject.eu/), involving intelligent tools and systems enabling depression self management in patients with secondary depression, led locally by Dr Luca Citi. He was awarded his DPhil in 2017, in the area of Biomedical Engineering and Healthcare Innovation, and specifically Medical Image Analysis at the University of Oxford. In the past he also worked as a qualified physician in the NHS and as a concert pianist. He is particularly interested in tackling problems involving applications of AI / ML in clinical practice.

2 June 2021

Title: Human Recognition Based on Multi-biometric Systems by Inas Al-taie 

Abstract: Biometrics are fundamental to a wide range of technologies that require a credible authentication approach to confirm personal identity. This work aims to identify effective features and machine learning methods for human recognition based on multiple biometrics, and to produce combinations of single biometric systems suitable for identification in specific applications, such as banking systems that use multi-biometric authentication for login procedures, and police and criminal-evidence applications. An implementation of a person identification system fusing different combinations of biometric modalities (face, ear, eye, hand and palmprint) at score level is examined in this work.
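
Score-level fusion of this kind can be sketched in a few lines; the min-max normalisation, the weights and the toy scores below are illustrative assumptions rather than the thesis' exact scheme.

```python
import numpy as np

# Hedged sketch of score-level fusion: min-max normalise each matcher's
# scores, then combine them with a weighted sum.
def fuse(scores, weights):
    normed = [(s - s.min()) / (s.max() - s.min() + 1e-9) for s in scores]
    return sum(w * n for w, n in zip(weights, normed))

face = np.array([0.9, 0.4, 0.2])        # probe similarity to 3 enrolled IDs
ear = np.array([0.7, 0.5, 0.1])
palm = np.array([0.8, 0.3, 0.3])
fused = fuse([face, ear, palm], [0.5, 0.2, 0.3])
print(fused.argmax())                   # identity decision
```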

Bio: Dr Inas Al-taie is a Senior Research Officer in the Brain-Computer Interfacing and Neural Engineering Laboratory in the School of Computer Science and Electronic Engineering at the University of Essex. Inas had more than 7 years of academic experience and 5 years of work in remote sensing technology research before she received her PhD in Computer Science in 2020 from the University of Essex, United Kingdom. Her research interests are the cognitive neuroscience of visual object recognition; computer vision and image processing; human biometric recognition, exploring the use of multiple biometrics (based on behavioural and physiological traits) and how they could outperform single biometrics; and machine learning and artificial intelligence, with a focus on deep and shallow convolutional neural networks (CNNs).

19 May 2021

Title: Open Discussion about Gesture Recognition led by Dr Anna Caute

Abstract: Aphasia is a communication disability caused by stroke or other brain injury. It affects 450,000 people in the UK and has a profound impact on quality of life. It is a heterogeneous condition, which varies in severity and can affect all aspects of communication, both verbal and non-verbal. Research shows that the use and understanding of gesture can be affected in people with aphasia (PWA). Whereas healthy speakers use a wide range of gesture types (e.g. pointing, pretending to use an object), PWA use a limited range. However, PWA rely on gesture more than healthy speakers to get their message across (van Nispen, 2017).

Speech and Language Therapists often encourage gesture use during therapy. However, it is an area that has been under-explored in research, and clinicians lack evidence-based tools to assess gesture (Caute et al, 2021). Gesture assessment poses many challenges: unlike spoken language, it is hard to describe in written form due to its holistic, imagistic, transitory nature. In gesture research, coding categories are used to describe gesture forms and functions. Coding provides rich, descriptive data, but is prohibitively time-consuming to use in clinical practice.

In future, a potential solution to this may lie in technology. Novel research has explored the use of motion-tracking technology to analyse the kinematic features of gesture (Trujillo et al., 2019). Technology has also been employed to deliver computer gesture therapy for people with aphasia (Roper et al, 2016). This seminar will be an open discussion about the potential for gesture recognition technology to facilitate the clinical assessment of gesture.

Bio: Dr Anna Caute is a Lecturer in Speech and Language Therapy in the School of Health and Social Care. Her main research interests are in gesture and the use of technology in aphasia therapy. Her PhD investigated the benefits of gesture therapy for people with severe aphasia. She has researched a variety of technological applications in therapy. Recent studies have investigated the use of e-readers, text-to-speech software and portable smart camera technologies to facilitate reading for people with aphasia, the use of voice recognition software to facilitate writing and the development of a novel gesture screening tool.

24 March 2021

Title: Convolutional Autoencoder based Deep Learning Approach for Alzheimer’s Disease Diagnosis using Brain MRI by Ekin Yagis

Abstract: Rapid and accurate diagnosis of Alzheimer’s disease (AD) is critical for patient treatment, especially in the early stages of the disease. While computer-assisted diagnosis based on neuroimaging holds vast potential for helping clinicians detect disease sooner, there are still some technical hurdles to overcome. This study presents an end-to-end disease detection approach using convolutional autoencoders, integrating supervised prediction and unsupervised representation learning. The 2D neural network is based upon a pre-trained 2D convolutional autoencoder that captures latent representations in structural brain MRI scans. Experiments on the OASIS brain MRI dataset revealed that the model outperforms a number of traditional classifiers in terms of accuracy using a single slice.
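
A toy sketch of the two-stage idea (unsupervised reconstruction, then supervised classification on the encoder's features) is shown below; the layer sizes and the random stand-in slices are arbitrary, and the study's actual architecture will differ.

```python
import torch
import torch.nn as nn

# Hedged sketch: a small 2D convolutional autoencoder whose encoder is
# reused for supervised AD-vs-control classification.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
)
classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, 2))

slices = torch.rand(8, 1, 128, 128)      # stand-in batch of MRI slices
recon_loss = ((decoder(encoder(slices)) - slices) ** 2).mean()  # pre-training
logits = classifier(encoder(slices))     # supervised stage on encoder features
```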

Bio: Ekin Yagis is a Ph.D. student in Computer Science and Electrical Engineering at the University of Essex. She majored in Electrical Engineering at Koc University and holds an M.Sc. degree from Sabanci University, Istanbul. She works as a research officer in CSEE under the supervision of Dr Alba Garcia and Dr Vahid Abolghasemi. Her research interests include medical image processing, machine learning and computer vision. She is currently focusing on the detection of neurodegenerative diseases such as Parkinson’s and Alzheimer’s using machine learning.

10 March 2021

Slides: Click here to download presentation

Title: How do human chemosignals influence the brain network involved in decision-making? by Saideh Ferdowsi

Abstract: Chemosensory communication is known as an effective way to influence the human emotion system. Phenomena like food selection or motivation based on chemical signals present a unique pathway between the chemosensory and emotion systems. Human chemosignals (e.g. sweat) produced during different emotional states contain associated distinctive odours and are able to induce the same emotions in other people. For instance, sweat is known as a social chemosignal participating in social interaction. Chemosignal perception engages a distributed neural network that has not yet been well characterised. In this talk, Dr Saideh Ferdowsi will illustrate how functional magnetic resonance imaging (fMRI) can be used to investigate the neural circuits underlying social emotional chemosignal processing.

Bio: Dr Saideh Ferdowsi is a senior research officer at the University of Essex working on the POTION project (Promoting Social Interaction through Emotional Body Odours). She received her PhD from the University of Surrey in biomedical signal and image processing. Her main research interests are biomedical signal and image processing, data fusion, blind source separation, machine/deep learning, Bayesian inference and brain connectivity. Saideh has been exploring the application of signal processing and statistical methods to the analysis of EEG and fMRI data of the human brain. The results of her research have been published in peer-reviewed journals.

24 February 2021

Slides: Click here to download presentation

Title: Automation of medical document processing by Srinidhi Karthikeyan 

Abstract: Do you know how many hours pharmacists spend processing medical documents? In the current situation, when we need healthcare workers more than ever, we cannot afford to have pharmacists tied up with paperwork. What can be done to solve this? Automate the work! We are working on a project to automate the processing of medical documents such as discharge summaries, referral letters and accident and emergency letters. Most of these documents, however, are scanned images and PDFs, which prevents the OCR engine from achieving its best accuracy. What are the challenges in processing medical documents, and how can they be solved? Does using a language model help improve the accuracy of the OCR output?
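
As a crude stand-in for the language-model idea, the sketch below snaps noisy OCR tokens to a small medical lexicon by string similarity; the lexicon and the example string are made up, and a real system would use a proper language model rather than difflib.

```python
import difflib

# Hedged sketch of post-OCR correction with a domain lexicon: replace each
# OCR token with its closest known medical term when similarity is high.
lexicon = ["paracetamol", "discharge", "hypertension", "referral"]

def correct(token):
    match = difflib.get_close_matches(token.lower(), lexicon, n=1, cutoff=0.8)
    return match[0] if match else token

ocr_output = "pat1ent discharqe with paracetam0l"
print(" ".join(correct(t) for t in ocr_output.split()))
```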

Bio: Srinidhi Karthikeyan was born in India in 1997. She completed her bachelor’s degree in Computer Science and Engineering at Anna University, Chennai, in 2019, and in 2020 she completed her master’s degree in Data Science at the University of Essex, Colchester, UK. Her main areas of interest are natural language processing and computer vision.

9 February 2021

Slides: Click here to download presentation

Title: The histogram of gradient orientation for BCI processing: Capturing Waveforms by Rodrigo Ramele

Abstract: This talk presents a method to analyze Electroencephalographic (EEG) signals based on their waveform shapes. The method mimics what electroencephalographers have done clinically: visually inspecting and categorizing phenomena within the EEG by extracting features from waveforms. These features are constructed by calculating histograms of oriented gradients (i.e. SIFT) from the pixels around the signal plot. This methodology could potentially provide an objective framework to analyze, characterize and classify EEG signal waveforms. We will explore the feasibility of this approach by detecting several signals widely used in the field of Brain-Computer Interfaces, particularly the P300, an ERP elicited by the oddball paradigm of rare events.
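
The pipeline can be sketched roughly as follows: rasterise a signal segment into a plot-like image and describe its shape with a histogram of oriented gradients. The toy sine signal and the HOG parameters are arbitrary choices, and skimage's HOG stands in for the SIFT-style descriptor used in the talk.

```python
import numpy as np
from skimage.feature import hog

# Hedged sketch: rasterise an EEG segment as a binary plot image, then
# describe the waveform's shape with a histogram of oriented gradients.
sig = np.sin(np.linspace(0, 6 * np.pi, 128)) + 0.1 * np.random.randn(128)
img = np.zeros((64, 128))
rows = ((sig - sig.min()) / (np.ptp(sig) + 1e-9) * 63).astype(int)
img[63 - rows, np.arange(128)] = 1.0             # one plot pixel per sample
descriptor = hog(img, orientations=8, pixels_per_cell=(8, 8),
                 cells_per_block=(1, 1))         # shape-describing feature vector
```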

Bio: Dr Rodrigo Ramele is a Computer Engineer from the Universidad Nacional de La Matanza (Argentina). He holds a Graduate Specialization in Cryptography from the Instituto Enseñanza Superior M.Savio (Argentina) and a Graduate Research Specialization in Robotics and Bioengineering from Tohoku University (Japan). He completed his Ph.D. in Brain Computer Interfaces at the Instituto Tecnológico de Buenos Aires (Argentina) in 2018, working on the analysis of EEG using Computer Vision techniques. He is currently a Senior Research Officer at the BCI-NE Lab at the University of Essex, working on BCI-based collaborative decision-making.

27 January 2021

Slides: Click here to download presentation

Title: Beyond Validation: Characterising Modes of Segmentation Failure by Tasos Papastylianou

Abstract: Validation typically answers the question “to what extent does a segmentation algorithm work?”. However, this fails to address the equally interesting questions of why, when, or how an algorithm succeeds or fails. In many clinical applications, these are non-trivial questions, since some failures may be more clinically relevant than others. We demonstrate how modes of segmentation failure can be investigated in an anatomically meaningful manner with the use of appropriate fuzzy maps / masks, which can be straightforwardly constructed to express meaningful anatomical relationships between a probabilistic segmentation object and its ground truth. This allows us to ask questions like “how much of the segmentation’s failure occurs near anatomical landmark X”, or “at an approximate distance Y from X”, “in the general direction of Z”, “twice as bad near X as near Z”, “how much of the failed parts are due to the presence of a particular anatomical substructure or anatomical artifact”, etc, thereby providing an extra layer of explainability to the validation process.

Bio:  Dr Tasos Papastylianou is a Senior Research Officer at the Brain-Computer Interfaces and Neural Engineering (BCI-NE) lab at the University of Essex, working on a US-UK Bilateral Academic Research Initiative (BARI) project led by Prof. Riccardo Poli, looking at human-AI collaborative decision-making. Prior to this, he worked as a Machine Learning and Biomedical Signal Processing researcher for the Nevermind Project (http://nevermindproject.eu/), involving intelligent tools and systems enabling depression self management in patients with secondary depression, led locally by Dr Luca Citi. He was awarded his DPhil in 2017, in the area of Biomedical Engineering and Healthcare Innovation, and specifically Medical Image Analysis at the University of Oxford. In the past he also worked as a qualified physician in the NHS and as a concert pianist. He is particularly interested in tackling problems involving applications of AI / ML in clinical practice.

2 December 2020

Slides: Click here to download presentation

Title: Essex at MediaEval Predicting Media Memorability 2020 task by Janadhip Jacutprakart

Abstract: In this presentation, Janadhip will present the approaches and results of the Essex-NLIP team’s participation in the MediaEval 2020 Predicting Media Memorability task. The task requires participants to build systems that can predict short-term and long-term memorability scores on the real-world video samples provided. The Essex-NLIP team investigated the use of different pre-computed features for predicting memorability scores with various regression models: Random Forest, Decision Tree, Gradient Boosting, Extra Trees and Sequential regression models were used in this experiment, and different pre-computed features were compared using regression. Additionally, feature-fusion models are proposed to explore how such models can provide enhanced and accurate predictions for both short-term and long-term memorability.
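
One of the simpler configurations can be sketched as follows, with random stand-ins for the pre-computed features and Spearman correlation (the task's usual measure) for scoring; the split and hyperparameters are arbitrary.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch of one regressor: predict memorability from pre-computed
# video features (random stand-ins here) and score with Spearman correlation.
X = np.random.rand(200, 128)            # stand-in pre-computed features
y = np.random.rand(200)                 # stand-in short-term memorability scores
model = RandomForestRegressor(n_estimators=100).fit(X[:150], y[:150])
rho, _ = spearmanr(model.predict(X[150:]), y[150:])
print(f"Spearman rho on held-out videos: {rho:.3f}")
```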

Bio: Janadhip Jacutprakart is currently a Computer Science PhD student at the University of Essex. She finished her MSc in Data Science with Merit at the University of Essex in September 2020, received an MBA in marketing and management from the University of the Thai Chamber of Commerce in 2018, and received a Bachelor of Technology in Computer Graphics and Multimedia from Bangkok University International in 2009. She is pursuing her PhD under the supervision of Dr Alba García Seco De Herrera. Her research focuses on computer vision in radiology imaging, with multi-disciplinary work in information retrieval and natural language processing. She is also a researcher on the ImageCLEFcaption research project with Dr Alba Garcia Seco De Herrera.

18 November 2020

Slides: Click here to download presentation

Title: Simple Effective Methods for Decision-Level Fusion in Two-Stream Convolutional Neural Networks for Video Classification by Rukiye Savran Kiziltepe

Abstract: Convolutional Neural Networks (CNNs) have recently been applied to video classification, where various methods for combining the appearance (spatial) and motion (temporal) information from video clips are considered. The most common method of combining the spatial and temporal information for video classification is averaging prediction scores at the softmax layer. Inspired by the Mycin uncertainty system for combining production rules in expert systems, this study proposes using the Mycin formula for decision fusion in two-stream convolutional neural networks. In this talk, a comparative study of different decision fusion formulas for video classification will be presented.
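
The abstract does not spell out the formula, but the classic MYCIN combination of two positive certainty factors is CF = CF1 + CF2(1 - CF1) = CF1 + CF2 - CF1*CF2. A minimal sketch treating the two streams' softmax scores as certainty factors follows, with renormalisation over classes as an extra assumption of the example.

```python
import numpy as np

# Hedged sketch: Mycin-style combination of the two streams' per-class
# softmax scores, instead of plain averaging.
def mycin_fuse(spatial, temporal):
    fused = spatial + temporal - spatial * temporal   # CF1 + CF2(1 - CF1)
    return fused / fused.sum()                        # renormalise over classes

spatial = np.array([0.7, 0.2, 0.1])     # appearance-stream softmax output
temporal = np.array([0.5, 0.4, 0.1])    # motion-stream softmax output
print(mycin_fuse(spatial, temporal).argmax())
```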

Bio: Rukiye Savran Kiziltepe is a Ph.D. student in the School of Computer Science and Electronic Engineering at the University of Essex. She received a B.Sc. degree from Hacettepe University, Ankara, in 2014, and an M.Sc. degree from the University of Essex in 2017. She is currently pursuing her Ph.D. studies under the supervision of Prof. John Gan. Her research concentrates on the study and development of deep learning schemes for video classification. Rukiye’s research interests include machine learning, video processing and computer vision. She is particularly interested in video classification using deep learning techniques.

4 November 2020

Slides: Click here to download presentation

Title:  Deep neural ensembles for improved pulmonary abnormality detection in chest radiographs by Sivarama Krishnan Rajaraman

Abstract: Cardiopulmonary diseases account for a significant proportion of deaths and disabilities across the world. Chest X-rays are a common diagnostic imaging modality for confirming intra-thoracic cardiopulmonary abnormalities. However, there remains an acute shortage of expert radiologists, particularly in under-resourced settings, which results in interpretation delays and could have a global health impact. These issues can be mitigated by an artificial intelligence (AI) powered computer-aided diagnostic (CADx) system. Such a system could help supplement decision-making and improve throughput while preserving, and possibly improving, the standard of care. The majority of such AI-based diagnostic tools at present use data-driven deep learning (DL) models that perform automated feature extraction and classification. Convolutional neural networks (CNNs), a class of DL models, have gained significant research prominence in tasks related to image classification, detection and localization. The literature reveals that they deliver promising results that scale impressively with an increasing number of training samples and computational resources. However, the techniques may be adversely impacted by their sensitivity to high variance or fluctuations in training data. Ensemble learning helps mitigate this by combining predictions and blending intelligence from multiple learning algorithms. The complex non-linear functions constructed within ensembles help improve robustness and generalization. Empirical results have demonstrated superiority over the conventional approach with stand-alone CNN models. In this talk, I will describe example work at the NLM that uses model ensembles to improve pulmonary abnormality detection in chest radiographs.
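
At its simplest, the ensembling step amounts to (weighted) averaging of the member models' probability outputs, as in the sketch below; the toy probabilities and weights are invented for illustration.

```python
import numpy as np

# Hedged sketch: simple weighted ensembling of per-model probability outputs.
def ensemble(prob_list, weights=None):
    probs = np.stack(prob_list)                       # (n_models, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights)
    return (probs * w[:, None]).sum(0) / w.sum()

models = [np.array([0.8, 0.2]), np.array([0.6, 0.4]), np.array([0.9, 0.1])]
print(ensemble(models))           # averaged "abnormal vs normal" probabilities
```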

Bio: Dr. Sivaramakrishnan Rajaraman joined the Lister Hill National Center for Biomedical Communications (LHNCBC), National Library of Medicine (NLM), National Institutes of Health (NIH), as a postdoctoral researcher in 2016. Dr. Rajaraman received his Ph.D. in Information and Communication Engineering from Anna University, Chennai, India. He is involved in projects that aim to apply computational science and engineering techniques to advancing life science applications. These projects involve the use of medical images to aid healthcare professionals in low-cost decision-making at point-of-care screening/diagnostics. Dr. Rajaraman is a versatile researcher with expertise in machine learning, data science, biomedical image analysis/understanding, and computer vision. He has more than 15 years of experience in academia, where he taught core and allied subjects in biomedical engineering. He has authored several national and international journal and conference publications in his area of expertise.

1 July 2020

Slides: Click here to download presentation

Title:  Essex at ImageCLEFcaption 2020 task: Medical Concept Detection with Image retrieval by Francisco Parrilla Andrade, Luke Bentley and Arely Aceves Compean

Abstract: ImageCLEF 2020 is an evaluation campaign organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. In this seminar we will describe our participation in the ImageCLEFcaption 2020 task. Based on the visual image content, the ImageCLEFcaption 2020 task provides the building blocks for medical image understanding by identifying the individual components from which captions are composed. These concepts can be further applied for context-based image and information retrieval purposes. Our approach identifies the presence of relevant concepts in a large corpus of medical images with an image retrieval methodology, using features extracted via a DenseNet-121 model.
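
The retrieval step can be sketched as follows: pool DenseNet-121 feature maps into one vector per image and take cosine nearest neighbours in the training corpus. Here weights=None keeps the sketch self-contained (no download), whereas the actual system would load pre-trained weights, and the random images are stand-ins.

```python
import torch
import torch.nn.functional as F
import torchvision

# Hedged sketch of the retrieval idea: DenseNet-121 features + cosine
# nearest neighbours; concepts would then be borrowed from the neighbours.
backbone = torchvision.models.densenet121(weights=None).features.eval()

def embed(batch):                                  # batch: (N, 3, 224, 224)
    with torch.no_grad():
        return F.adaptive_avg_pool2d(backbone(batch), 1).flatten(1)

corpus = torch.rand(100, 3, 224, 224)              # stand-in training images
query = torch.rand(1, 3, 224, 224)
sims = F.cosine_similarity(embed(query), embed(corpus))   # (100,)
nearest = sims.topk(5).indices                     # transfer these images' concepts
```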

Bio: Francisco Parrilla Andrade, Luke Bentley and Arely Aceves Compean are currently studying at the University of Essex. As part of their work at Essex they developed a solution for the ImageCLEFcaption 2020 task, achieving 3rd position on the benchmark.

17 June 2020

Slides: Click here to download presentation

Title:  From Research to Application: Using Computer Vision-Based Neural Networks to Reduce Food Waste by Somdip Dey

Abstract: Computer scientists at the University of Essex developed an AI-powered food stock management app, nosh, to help users remember the expiry dates of stocked items before they expire. Alongside existing features such as recipe suggestions and showing users their food buying and wasting trends to help reduce household food waste, computer vision-based neural network models are being introduced in the app to make it easier for users to keep track of stocked food. This talk will present a working theory of how cutting-edge research can be transferred into real-world applications that help people and society, for example by reducing food waste. Examples of using computer vision-based neural networks in the nosh application are discussed as case studies.

Bio: Somdip Dey is currently an Artificial Intelligence Ph.D. candidate working on embedded systems at the University of Essex, the U.K. His current research interests include affordable artificial intelligence, information security, computer systems engineering and computing resources optimization for performance, energy, temperature, reliability, and security in mobile platforms. He has also served as a Reviewer and TPC Member for several top conferences such as DATE, DAC, AAAI, CVPR, ICCV, ASAP, IEEE EdgeCom, IEEE CSCloud, and IEEE CSE. 

3 June 2020

Slides: Click here to download presentation

Title:  TMAV: Temporal Motionless Analysis of Video using CNN in MPSoC by Somdip Dey

Abstract: Analyzing video for traffic categorization is an important pillar of Intelligent Transport Systems. However, it is difficult to analyze and predict traffic from individual image frames, because the representation of each frame may vary significantly within a short time period, and a single frame also inaccurately represents the traffic over a longer period of time, such as a video. We propose a novel human-inspired methodology that integrates the analysis of previous image frames of the video into the analysis of the current image frame, the same way a human being analyzes the current situation based on past experience. Our proposed methodology, called IRON-MAN (Integrated Rational prediction and Motionless ANalysis), utilizes a Bayesian update on top of the individual image frame analysis in the videos, which has resulted in highly accurate Temporal Motionless Analysis of the Videos (TMAV) predictions for most of the chosen test cases. The proposed approach can be used for TMAV using a Convolutional Neural Network (CNN) in applications where the number of objects in an image is the deciding factor for prediction, and results show that it outperforms the state-of-the-art for the chosen test case. We also introduce a new metric named Energy Consumption per Training Image (ECTI). Since different CNN-based models have different training capabilities and computing resource utilization, some models are more suitable for embedded device implementation than others; the ECTI metric is useful for assessing the suitability of a CNN model for multi-processor systems-on-chips (MPSoCs), with a focus on energy consumption and reliability in terms of the lifespan of the embedded device using these MPSoCs.
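
The Bayesian-update idea at the heart of the approach can be sketched independently of the CNN: treat each frame's softmax output as a likelihood and fold it into a running posterior over traffic classes. The three-class setup and the toy frame outputs below are invented for illustration.

```python
import numpy as np

# Hedged sketch of the Bayesian update across frames: each frame's CNN
# softmax output updates a running posterior over traffic classes.
def update(prior, likelihood):
    post = prior * likelihood
    return post / post.sum()

posterior = np.ones(3) / 3              # uniform prior over {low, medium, high}
frame_outputs = [np.array([0.5, 0.3, 0.2]),
                 np.array([0.6, 0.3, 0.1]),
                 np.array([0.4, 0.4, 0.2])]
for like in frame_outputs:              # fold in one frame at a time
    posterior = update(posterior, like)
print(posterior)
```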

Bio: Somdip Dey is currently an Artificial Intelligence Ph.D. candidate working on embedded systems at the University of Essex, the U.K. His current research interests include affordable artificial intelligence, information security, computer systems engineering and computing resources optimization for performance, energy, temperature, reliability, and security in mobile platforms. He has also served as a Reviewer and TPC Member for several top conferences such as DATE, DAC, AAAI, CVPR, ICCV, ASAP, IEEE EdgeCom, IEEE CSCloud, and IEEE CSE. 

20 May 2020


Title: 3D Convolutional Neural Networks for Diagnosis of Alzheimer’s Disease via structural MRI by Ekin Yagis

 

Abstract: Alzheimer’s Disease (AD) is a widespread neurodegenerative disease caused by structural changes in the brain that lead to the deterioration of cognitive functions. Patients usually experience diagnostic symptoms at later stages, after irreversible neural damage has occurred. Early detection of AD is therefore crucial for patients’ quality of life and for starting treatments that decelerate the progression of the disease. Early detection may be possible via computer-assisted systems using neuroimaging data. Among these, deep learning utilizing magnetic resonance imaging (MRI) has become a prominent tool due to its capability to extract high-level features through local connectivity, weight sharing and spatial invariance. In this paper, we built a 3D VGG-variant convolutional neural network (CNN) to investigate classification accuracy on two publicly available datasets, ADNI and OASIS. We used 3D models to prevent the information loss that occurs when 3D MRI volumes are sliced and analysed by 2D convolutional filters in 2D models.
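
For a sense of scale, a toy VGG-style 3D CNN over an MRI volume can be written in a few lines; the channel counts, volume size and two-class head below are arbitrary, not the network from the talk.

```python
import torch
import torch.nn as nn

# Hedged sketch of a small VGG-style 3D CNN over an MRI volume.
model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 ** 3, 2),            # AD vs control head
)
scan = torch.rand(1, 1, 64, 64, 64)        # toy (batch, channel, D, H, W) volume
logits = model(scan)
```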

 

Bio: Ekin is a Ph.D. student in Computer Science and Electrical Engineering at the University of Essex. She majored in Electrical Engineering at Koc University and holds an M.Sc. degree from Sabanci University, Istanbul. She works as a research assistant on the Nevermind project under the supervision of Dr Luca Citi and Dr Alba García Seco de Herrera. Her research interests include medical image processing, machine learning and computer vision. She is currently focusing on the detection of neurodegenerative diseases such as Parkinson’s and Alzheimer’s using machine learning.

 

6 May 2020


Title: Adaptive Vision for Human Robot Collaboration by Dimitri Ognibene 

Abstract: Unstructured social environments, e.g. building sites, release an overwhelming amount of information, yet behaviorally relevant variables may not be directly accessible. Currently proposed solutions for specific tasks, e.g. autonomous cars, usually employ over-redundant, expensive, and computationally demanding sensory systems that attempt to cover the wide set of sensing conditions the system may have to deal with. Adaptive control of the sensors and of the input to the perception process is a key solution found by nature to cope with such problems, as shown by the foveal anatomy of the eye and its high mobility and control accuracy. The design principles of systems that adaptively find and select relevant information are important for both Robotics and Cognitive Neuroscience. At the same time, collaborative robotics has recently progressed to human-robot interaction in real manufacturing. Measuring and modeling task-specific gaze behaviour is mandatory to support smooth human-robot interaction. Indeed, anticipatory control for human-in-the-loop architectures, which can enable robots to proactively collaborate with humans, heavily relies on the observed gaze and action patterns of their human partners. The talk will describe several systems employing adaptive vision to support robot behavior and collaboration with humans.

Bio: Dr Dimitri Ognibene obtained his PhD in Robotics from the University of Genoa in 2009. Before joining the University of Essex, he performed experimental studies and developed formal methods for active social perception at UPF, Barcelona, as a Marie Skłodowska-Curie COFUND Fellow; developed algorithms for active vision in industrial robotic tasks as a Research Associate (RA) at the Centre for Robotics Research, King's College London; devised Bayesian methods and robotic models for attention in social and dynamic environments as an RA at the Personal Robotics Laboratory, Imperial College London; and studied the interaction between active vision and autonomous learning in neuro-robotic models as an RA at the Institute of Cognitive Sciences and Technologies of the Italian Research Council (ISTC-CNR). He also collaborated with the Wellcome Trust Centre for Neuroimaging (UCL) to address the exploration issue in the currently dominant neurocomputational modelling paradigm. Dr Ognibene has also been a Visiting Researcher at the Resource-Bounded Reasoning Laboratory at UMass and at the University of Reykjavik (Iceland), exploring the symmetries between active sensor control and active computation, or metareasoning. He has presented his work at several international conferences on artificial intelligence, adaptation, and development, and has published in international peer-reviewed journals. He was invited to speak at the International Symposium on Attention in Cognitive Systems (2013 and 2014) as well as at various other neuroscience, robotics, and machine-learning international venues. Dr Ognibene is an Associate Editor of Paladyn, Journal of Behavioral Robotics, and has been part of the Program Committee of several conferences and symposia.

22 April 2020


Title: Single Sample Augmentation using GANs for Enhancing the Performance of Image Classification by Shih-Kai Hung

Abstract: It is difficult to achieve high performance without sufficient training data for deep convolutional neural networks (DCNNs) to learn from. Data augmentation plays an important role in improving robustness and preventing overfitting in machine learning for many applications, such as image classification. In this paper, a novel data-augmentation method is proposed to address the problem of machine learning with small training datasets. The proposed method can synthesize similar images with rich diversity from only a single original training sample, increasing the amount of training data by using generative adversarial networks (GANs). The synthesized images are expected to possess class-informative features that may appear in the validation or testing data but not in the training data, because the training dataset is small, and thus to be effective as augmented training data for improving the classification accuracy of DCNNs. The experimental results demonstrate that the proposed method, with a novel GAN structure for training-image augmentation, can significantly enhance the classification performance of DCNNs in applications where original training data is limited.
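
The talk's novel GAN structure is not reproduced here; purely as an illustrative sketch, the PyTorch snippet below trains a tiny GAN against random crops of one image and then samples synthetic variants, which is the general recipe the abstract describes. All architectures, sizes, and hyperparameters are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical setup: one 64x64 RGB training image; random crops resized
# back to 64x64 stand in for the "real" data distribution.
single_image = torch.rand(3, 64, 64)

def random_crops(img, n, size=48):
    crops = []
    for _ in range(n):
        y = torch.randint(0, img.shape[1] - size + 1, (1,)).item()
        x = torch.randint(0, img.shape[2] - size + 1, (1,)).item()
        crop = img[:, y:y + size, x:x + size]
        crops.append(F.interpolate(crop[None], size=64, mode="bilinear",
                                   align_corners=False)[0])
    return torch.stack(crops)

gen = nn.Sequential(  # noise vector -> 3x64x64 image
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Sigmoid(),
)
disc = nn.Sequential(  # image -> real/fake logit
    nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

for step in range(100):
    real = random_crops(single_image, 8)
    fake = gen(torch.randn(8, 100)).view(8, 3, 64, 64)

    # Discriminator: push real crops towards 1, generated samples towards 0.
    loss_d = (F.binary_cross_entropy_with_logits(disc(real), torch.ones(8, 1))
              + F.binary_cross_entropy_with_logits(disc(fake.detach()),
                                                   torch.zeros(8, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to fool the discriminator.
    loss_g = F.binary_cross_entropy_with_logits(disc(fake), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# gen(torch.randn(n, 100)) now yields synthetic variants that could be
# added to the training set of a downstream classifier.
```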

Bio: Shih-Kai Hung is a first-year PhD student in CSEE, working on computer vision and deep learning. He completed his BSc in Electrical Engineering but shifted his MSc thesis towards computer science through internet security and watermarking. His current research focuses on image synthesis using generative adversarial networks (GANs) to address the problem of having very limited data in image classification, which traditionally requires a large number of labelled images to reach high performance.

4 March 2020


Title: DisplaceNet by Dr Grigorios Kalliatakis

Abstract: Every year millions of men, women and children are forced to leave their homes and seek refuge from wars, human rights violations, persecution, and natural disasters. People were forcibly displaced at a record rate of 44,400 every day throughout 2017, raising the cumulative total to 68.5 million at the year's end, overtaking the total population of the United Kingdom. Up to 85% of the forcibly displaced find refuge in low- and middle-income countries, calling for increased humanitarian assistance worldwide. To reduce the amount of manual labour required for human-rights-related image analysis, we introduce DisplaceNet, a novel model which infers potential displaced people from images by integrating an estimate of the control level of the situation with a conventional convolutional neural network (CNN) classifier in one framework for image classification.
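
DisplaceNet's published architecture is more elaborate than this; the sketch below only illustrates the general idea of fusing an estimated "control level" with a conventional image classifier, using an invented shared-backbone PyTorch model. Every layer size and head here is an assumption, not the paper's design.

```python
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    """Hypothetical two-branch design: one head scores the 'control level'
    of the depicted situation; the classification head conditions on both
    the image features and that score."""
    def __init__(self, backbone_dim=128, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a CNN feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, backbone_dim), nn.ReLU(),
        )
        self.control_head = nn.Linear(backbone_dim, 1)          # control level
        self.class_head = nn.Linear(backbone_dim + 1, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        control = torch.sigmoid(self.control_head(feats))       # in [0, 1]
        logits = self.class_head(torch.cat([feats, control], dim=1))
        return logits, control

model = TwoBranchClassifier()
logits, control = model(torch.randn(2, 3, 224, 224))
print(logits.shape, control.shape)  # torch.Size([2, 2]) torch.Size([2, 1])
```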

Bio: Dr Grigorios Kalliatakis is a computer vision researcher with a computer science background in image analysis and AI, working with the ESRC Human Rights, Big Data and Technology (HRBDT) Project, housed at the University of Essex. His current research focuses on the development and application of methods for interpreting and analysing complex imagery in order to automate the visual recognition of various human rights violations. He has experience in a variety of computer vision topics, from image classification and image interpretation to scene understanding and big data.

19 February 2020

Slides: Click here to download presentation 

Title: SoCodeCNN: A new approach to teaching machines to understand program source code using computer vision methodologies by Somdip Dey

Abstract: Automated feature extraction from program source code, so that appropriate computing resources can be allocated to the program, is very difficult given the current state of technology. Conventional methods therefore call for skilled human intervention to extract features from programs. This research work, named SoCodeCNN, is the first to propose a novel human-inspired approach to automatically convert program source code to visual images. The images can then be utilized for automated classification by a visual convolutional neural network (CNN) based algorithm.
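
A toy version of the code-to-image step might look like the following; it simply maps raw source bytes onto grayscale pixels for illustration, whereas the real SoCodeCNN pipeline applies further processing to the program before visualisation.

```python
import numpy as np
from PIL import Image

def code_to_image(source: str, width: int = 64) -> Image.Image:
    """Map the bytes of a program's source code onto a grayscale image,
    padding the final row with zeros. Purely illustrative."""
    data = np.frombuffer(source.encode("utf-8"), dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(width * height, dtype=np.uint8)
    padded[:len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

img = code_to_image("int main() { return 0; }")
img = img.resize((224, 224))  # resized to a standard CNN input size
```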

Bio: Somdip Dey is currently an Artificial Intelligence Ph.D. candidate working on embedded systems at the University of Essex, UK. His current research interests include affordable artificial intelligence, information security, computer systems engineering, and computing-resource optimization for performance, energy, temperature, reliability, and security in mobile platforms. He has also served as a Reviewer and TPC Member for several top conferences, such as DATE, DAC, AAAI, CVPR, ICCV, ASAP, IEEE EdgeCom, IEEE CSCloud, and IEEE CSE.


5 February 2020


Abstract: Marine conservation often relies on long-term monitoring projects to effectively set and maintain sustainability goals. Current methods are time-consuming and restrictive for large-scale areas, often relying on human annotation for classifying and quantifying substrates either in situ or photographically. ImageCLEFcoral was set up in 2019 to challenge teams to develop systems for the automatic annotation and localisation of substrates from photographs, with the aim of greatly speeding up data collection and allowing monitoring to be expanded in scale and scope. The naturally varied morphology of reef substrates poses a greater challenge than those normally faced by machine and deep learning algorithms, making this one of the more complex ImageCLEF tasks. Entrants to the 2020 task will continue to push forward in this challenge, with the aim of continually improving the accuracy of annotations year after year.

Bio: Jessica Wright is an interdisciplinary PhD student in CSEE and Biological Sciences, working on 3D reconstruction of coral reef systems. She completed her BSc and MSc in Marine Biology, but her MSc thesis shifted her towards computer science through 3D modelling as a tool for reef-complexity measurement and the monitoring of natural systems. Soon after her PhD began, she started working with the ImageCLEFcoral team to annotate reef substrate images in the hope of developing an effective system for monitoring reefs and prioritising conservation where it is most needed.

22 January 2020

Slides: Click here to download presentation

Title: Evaluation of Fuzzy and Probabilistic Segmentation Algorithms by Dr Tasos Papastylianou

Abstract: Validation is a key concept in the development and assessment of medical image segmentation algorithms. However, the proliferation of modern, non-deterministic segmentation algorithms has not been met by an equivalent improvement in validation strategies. 

In this talk, we will make the case that extant validation practices can lead to false results in the presence of probabilistic algorithms and gold standards. We will then briefly examine the state of the art in validation and propose an improved validation method for non-deterministic segmentations, showing that it improves validation precision and accuracy on both synthetic and clinical data sets, compared to more traditional (but still widely used) methods and the state of the art.
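
The proposed validation method itself is left to the talk; as a point of reference, a fuzzy generalisation of the familiar Dice overlap, using the voxel-wise minimum as the fuzzy intersection, can be sketched as follows. The random arrays merely stand in for a probabilistic segmentation and a probabilistic gold standard.

```python
import numpy as np

def soft_dice(seg: np.ndarray, gold: np.ndarray, eps: float = 1e-8) -> float:
    """Fuzzy Dice overlap: treats both the segmentation and the gold
    standard as per-voxel probabilities rather than hard labels."""
    intersection = np.minimum(seg, gold).sum()
    return float(2 * intersection / (seg.sum() + gold.sum() + eps))

seg = np.random.rand(32, 32)   # hypothetical probabilistic segmentation
gold = np.random.rand(32, 32)  # hypothetical probabilistic gold standard
print(soft_dice(seg, gold))
```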

Bio: Dr Tasos Papastylianou is a Senior Research Officer in Machine Learning and Biomedical Signal Processing, working on the Nevermind Project (http://nevermindproject.eu/), which involves intelligent tools and systems enabling depression self-management in patients with secondary depression. Prior to this, he was awarded his DPhil in November 2017 in the area of Biomedical Engineering, specifically Medical Image Analysis, via the CDT in Healthcare Innovation at the University of Oxford. During this time he also co-founded Sentimoto Ltd, a company specialising in wearable and mobile analytics for older adults. Before his DPhil, Tasos was a qualified physician working in the NHS.

4 December 2019


Title: Combining Very Deep Convolutional Neural Networks and Recurrent Neural Networks for Video Classification by Rukiye Savran Kiziltepe

Abstract: Convolutional Neural Networks (CNNs) have been demonstrated to produce outstanding performance in image classification problems. Recurrent Neural Networks (RNNs) have been utilized to make use of temporal information for time series classification. The main goal of this study is to examine how temporal information between frame sequences can be used to improve the performance of video classification using RNNs. In this talk, a comparative study of seven video classification network architectures will be presented. 
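
As a minimal sketch of the CNN-plus-RNN pattern the talk compares (not any of the seven specific architectures), here is a PyTorch model that extracts per-frame features with a small CNN and aggregates them over time with an LSTM; all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class CNNRNNClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM; the final hidden state
    drives the video-level prediction."""
    def __init__(self, feat_dim=64, hidden=128, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(  # small stand-in for a deep CNN backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):            # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)      # h: final hidden state per layer
        return self.head(h[-1])

model = CNNRNNClassifier()
clip = torch.randn(2, 16, 3, 64, 64)     # two clips of 16 frames each
print(model(clip).shape)                  # torch.Size([2, 10])
```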

Bio: Rukiye Savran Kiziltepe is a Ph.D. student in the School of Computer Science and Electronic Engineering at the University of Essex. She received a B.Sc. degree from Hacettepe University, Ankara, in 2014, and an M.Sc. degree from the University of Essex in 2017. She is currently pursuing her Ph.D. under the supervision of Prof. John Gan. Her research concentrates on the study and development of deep learning schemes for video classification. Rukiye's research interests include machine learning, video processing, and computer vision; she is particularly interested in video classification using deep learning techniques.

13 November 2019

Title: Deep Learning for Neurological Disease Classification by Ekin Yagis

Abstract: In recent years, convolutional neural networks (CNNs) have been used to detect and classify a range of diseases, from cancer to neurological disorders. In this talk, the generalization performance of such networks on the classification of the two most common neurological disorders, namely Parkinson’s Disease (PD) and Alzheimer’s Disease (AD), will be discussed.

Bio: Ekin Yagis is a Ph.D. student in Computer Science and Electrical Engineering at the University of Essex. She majored in Electrical Engineering at Koc University and holds an M.Sc. degree from Sabanci University, Istanbul. She works as a research assistant on the Nevermind project under the supervision of Dr. Luca Citi and Dr. Alba García Seco de Herrera. Her research interests include medical image processing, machine learning, and computer vision. She is currently focusing on the detection of neurodegenerative diseases such as Parkinson’s and Alzheimer’s using machine learning.