Yes! I love doing research projects! Why? Some reasons have been beautifully summed up here.
Broadly, I am interested in Deep Learning and its applications to various areas such as Geometry and Natural Language Processing.
Below, I describe my research interests and list my publications.
Here are some problem statements that I have worked on in the past:
Adversarial Geometry: Deep Neural Nets are vulnerable to adversarial examples (Madry et al.). Now, as per the Manifold Hypothesis, the datasets that are used to train Neural Networks lie on an underlying low-dimensional manifold. Stutz et al. showed that these adversarial examples lie “off” the data manifold. I worked on finding methods to train NN-based classifiers with explicit signals about the geometry of the underlying data manifold, so that they can use this knowledge as a safeguard against adversarial examples.
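To make the threat model concrete, here is a toy sketch of the fast gradient sign method (FGSM, Goodfellow et al.), one standard way such adversarial examples are generated. The linear classifier and all numbers below are illustrative stand-ins, not part of my actual project:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to increase the logistic loss of a toy linear classifier.

    x: input vector; w, b: classifier parameters; y: true label in {0, 1};
    eps: L-infinity perturbation budget.
    """
    logit = w @ x + b
    p = 1.0 / (1.0 + np.exp(-logit))   # sigmoid, model's confidence in class 1
    grad_x = (p - y) * w               # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)   # step in the direction that hurts the model most

# Toy example: a clean point with true label y = 1.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.1)
# x_adv is a nearby point on which the classifier is strictly less confident
# in the true class -- and, per Stutz et al., such points tend to lie off
# the data manifold.
```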
Deriving aspect-based structure from unstructured text under minimal supervision: Given a review of a product like a “laptop”, how can we extract what it says about various aspects, like “RAM”, “audio system”, etc., with minimal or no annotated data? I worked on a similar problem during my Research Internship at the Max Planck Institute for Informatics in Saarbruecken.
Functional Maps for Non-isometric Shape Correspondences: Functional Maps describe how a function defined on a manifold or mesh changes under deformations. They have emerged as useful tools for finding shape correspondences. I am trying to find ways to discover these correspondences using functional maps under non-isometric deformations (e.g., stretching or tearing a surface).
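The core computational step can be sketched in a few lines: a functional map is a small matrix C that transports the spectral coefficients of corresponding descriptor functions from one shape to the other, and it is typically recovered by least squares. The synthetic bases and descriptors below are placeholders for real Laplace-Beltrami eigenbases and shape descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4        # size of the truncated spectral basis on each shape
n_desc = 10  # number of corresponding descriptor functions

# Descriptors expressed in each shape's spectral basis (synthetic here).
A = rng.standard_normal((k, n_desc))   # source-shape coefficients
C_true = rng.standard_normal((k, k))   # ground-truth map (unknown in practice)
B = C_true @ A                         # target-shape coefficients

# Recover the functional map: argmin_C || C A - B ||_F.
# C A = B  <=>  A^T C^T = B^T, solved column-wise by least squares.
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
```

In real pipelines the objective also carries regularizers (e.g., commutativity with the Laplacian), which is precisely where non-isometric deformations make the problem hard, since the isometry-based regularizers no longer hold.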
What does a pre-trained Language Model really know?: Recent works (Roberts et al., Petroni et al.) have suggested that pre-trained Language Models can serve as a feasible replacement for traditional Knowledge Bases. I want to explore whether these pre-trained models can really capture implicit information (which often requires multi-hop reasoning) or work with Commonsense knowledge.
In the past, I have also dabbled with problems at the intersection of Data Mining, NLP, Machine Learning, and Complex Networks. My Bachelor’s thesis focused on the detection of “collusive” users in Online Social Networks such as Twitter. You can read more about it here. I have also explored Speech Synthesis and Recognition, Affective Computing, and Numerical Methods through courses at IIIT Delhi.
I have also made smaller research excursions into Discrete Differential Geometry, Geometry Processing, and Numerical Methods. These are topics that I am fond of and would love to devote more time to in the future.
Below is a list of my publications. Along with my projects, they should give you an idea about my experience and interests.
Papers & Preprints
Analyzing and Detecting Collusive Users Involved in Blackmarket Retweeting Activities
Udit Arora, Hridoy Sankar Dutta, Brihi Joshi*, Aditya Chetan*, Tanmoy Chakraborty
In ACM Transactions on Intelligent Systems and Technology, 2020
CoReRank: Ranking to Detect Users Involved in Blackmarket-Based Collusive Retweeting Activities
Aditya Chetan*, Brihi Joshi*, Hridoy Sankar Dutta*, Tanmoy Chakraborty
In WSDM, 2019
Retweet Us, We Will Retweet You:
Spotting Collusive Retweeters Involved in Blackmarket Services
Hridoy Sankar Dutta, Aditya Chetan*, Brihi Joshi*, Tanmoy Chakraborty
In ASONAM, 2018
Did You "Read" the Next Episode? Using Textual Cues for Predicting Podcast Popularity
Brihi Joshi*, Shravika Mittal*, Aditya Chetan*
In The 1st Workshop on NLP for Music and Audio (NLP4MusA) at ISMIR 2020
Generating clues for gender based occupation de-biasing in text
Nishtha Madaan, Gautam Singh, Sameep Mehta, Aditya Chetan*, Brihi Joshi*
arXiv preprint arXiv:1804.03839