About

Hello! You've reached Valli, a quiet but curious Machine Learning and Artificial Intelligence enthusiast, Practicing Pianist, Puzzle Lover, Anime Watcher, Food Explorer, and Travel Aficionado.

You can find a bit more about me below. Should you wish to contact me, the details are to the right. Thanks for visiting!

Publications

Control of Memory, Active Perception, and Action

ICML '16
Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, Honglak Lee

Abstract: In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks. While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures. Additionally, we evaluate the generalization performance of the architectures on environments not used during training. The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.

Link: Paper on arXiv.org, Public Release of Minecraft Mod with code for Agents

Extending World Models for Multi-Agent Reinforcement Learning in Malmo

AIIDE '18
Valliappa Chockalingam, Tegg Tae Kyong Sung, Feryal Behbahani, Rishab Gargeya, Amlesh Sivanantham, Aleksandra Malysheva

Abstract: Recent work in (deep) reinforcement learning has increasingly looked to develop better agents for multi-agent/multitask scenarios as many successes have already been seen in the usual single-task single-agent setting. In this paper, we propose a solution for a recently released benchmark which tests agents in such scenarios, namely the MARLÖ competition. Following the 2018 Jeju Deep Learning Camp, we consider a combined approach based on various ideas generated during the camp as well as suggestions for building agents from recent research trends, similar to the methodology taken in developing Rainbow (Hessel et al. 2017). These choices include the following: using model-based agents which allows for planning/simulation and reduces computation costs when learning controllers, applying distributional reinforcement learning to reduce losses incurred from using mean estimators, considering curriculum learning for task selection when tasks differ in difficulty, and graph neural networks as an approach to communicate between agents. In this paper, we motivate each of these approaches and discuss a combined approach that we believe will fare well in the competition.

Link: Paper in CEUR Workshop Proceedings: http://ceur-ws.org/Vol-2282/MARLO_110.pdf

Work Experience

Mentee

Jun. 2018 - Aug. 2018
Deep Learning Camp Jeju 2018, Jeju, South Korea

Supervised By: Yu-Han Liu and Taehoon Kim

  • Worked on extending distributional reinforcement learning agents to adapt behaviour based on risk
  • Conducted experiments to test generalization behaviour and safety aspects of novel agents

Research Intern

Jul. 2017 - Oct. 2017
Preferred Networks, Tokyo, Japan

Supervised By: Toshiki Kataoka and Brian Vogel

  • Read up on multi-agent and multitask deep reinforcement learning
  • Focused on the generalization of multitask deep RL agents in a setting where language acts as an inductive bias
  • Conducted experiments with novel agents in various procedurally generated environments

Directed Independent Study Researcher

Sep. 2016 - Dec. 2016
College of Engineering, University of Michigan, Ann Arbor

Supervised By: Professor Satinder Singh Baveja

  • Read papers in hierarchical reinforcement learning and planning
  • Developed novel agents that can act and plan at different temporal scales
  • Wrote a report about the findings
  • Resources: Report Draft


EECS 445 (Undergraduate Machine Learning) Instructional Aide

Sep. 2016 - Dec. 2016
College of Engineering, University of Michigan, Ann Arbor

  • Constructed programming assignments for homework and hands-on lecture sections
  • Taught weekly discussion sections and hands-on lecture sections
  • Managed the Piazza student-instructor question/answer forum
  • Course Notes: Jupyter Notebook Lecture Slides
  • Neural Networks/Deep Learning Homework Assignment: Assignment document, Skeleton code Zipfile, Solution Zipfile


Research Intern

May 2016 - Aug. 2016
Microsoft Research, Cambridge, UK

  • Helped develop the Minecraft AI platform, Project Malmo, with a focus on building a range of tasks
  • Implemented a variety of RL agents (primarily in TensorFlow and Chainer), from traditional agents like SARSA(λ) to more complex state-of-the-art deep RL agents like the Dueling DQN
  • Studied the generalization performance of agents using a difficulty-metrics-based approach
  • Resources: Website on Project Malmo GitHub page
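As a small illustration of the classical end of that agent spectrum, here is a minimal tabular SARSA(λ) learner with accumulating eligibility traces. The toy corridor environment and all hyperparameters below are illustrative stand-ins, not part of the Malmo work:

```python
import numpy as np

class Corridor:
    """Toy 5-state corridor: actions move left/right; reward 1 at the goal."""
    n_states, n_actions = 5, 2

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, self.s - 1) if a == 0 else min(4, self.s + 1)
        done = self.s == 4
        return self.s, float(done), done

def sarsa_lambda(env, episodes=500, alpha=0.1, gamma=0.99, lam=0.9, eps=0.1):
    """Tabular SARSA(lambda) with accumulating eligibility traces."""
    rng = np.random.default_rng(0)
    Q = np.zeros((env.n_states, env.n_actions))

    def policy(s):
        if rng.random() < eps:
            return int(rng.integers(env.n_actions))
        best = np.flatnonzero(Q[s] == Q[s].max())
        return int(rng.choice(best))      # break ties randomly

    for _ in range(episodes):
        E = np.zeros_like(Q)              # eligibility traces
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = policy(s2)
            delta = r + gamma * Q[s2, a2] * (not done) - Q[s, a]
            E[s, a] += 1.0                # accumulate trace for (s, a)
            Q += alpha * delta * E        # update every traced pair at once
            E *= gamma * lam              # decay all traces
            s, a = s2, a2
    return Q

Q = sarsa_lambda(Corridor())
```

After training, the greedy policy at every non-terminal state should prefer the "right" action, since only reaching the goal yields reward.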


Projects

Some projects that have arisen out of side hobbies and past work.

Modeling Serotonergic Neurons in the DRN

Studied depression and reward processes through computational models of serotonergic neurons in the dorsal raphe nucleus.

Resources: Paper, Zipfile with code


Build Battle Mod Creation Tutorial

Created a Java + MinecraftForge tutorial for building a simple task in the Minecraft AI platform, Project Malmo, in which an agent must copy a given structure using build, break, and move actions.

Resources: Website on Project Malmo GitHub page


Backpropagation Multilayer Neural Networks in Python w/ NumPy

Designed an EECS 445 (Machine Learning) homework assignment on implementing backpropagation for multilayer neural networks.

Resources: Assignment document, Skeleton code Zipfile, Solution Zipfile
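In the spirit of that assignment, a minimal two-layer network trained with hand-derived backpropagation in NumPy might look as follows. The data, shapes, and hyperparameters here are illustrative, not the actual assignment's skeleton:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, p

def bce(p, y):
    """Binary cross-entropy loss."""
    return float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

loss_before = bce(forward(X)[1], y)

lr = 0.5
for _ in range(2000):
    h, p = forward(X)
    # gradient of BCE w.r.t. the pre-sigmoid logits simplifies to (p - y)
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0)
    dh = (dlogits @ W2.T) * (1 - h ** 2)        # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

loss_after = bce(forward(X)[1], y)
accuracy = float(((forward(X)[1] > 0.5) == y).mean())
```

Full-batch gradient descent on this XOR-like task drives the loss down and the accuracy well above chance.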


Anomaly Detection in Controller Area Networks

An initial look at machine learning techniques, namely One-Class SVMs and LSTMs, for detecting malicious messages passing through primary vehicle computer systems (the CAN bus in particular).

Resources: Paper
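The One-Class SVM side of that idea can be sketched with scikit-learn: fit the model on benign traffic only, then flag messages it scores as outliers. The two-dimensional features below are synthetic stand-ins for CAN-derived features; real feature extraction from a CAN log is not shown:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-in features (e.g. message ID, inter-arrival time), synthetic here.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # benign traffic
attack = rng.normal(loc=6.0, scale=0.5, size=(20, 2))    # injected messages

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
clf.fit(normal)                      # train on benign traffic only

pred = clf.predict(attack)           # +1 = inlier, -1 = anomaly
flagged = float((pred == -1).mean())
```

Since the injected cluster lies far from the benign distribution, nearly all of its messages should be flagged.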


Incentivizing Exploration with Denoising Auto-Encoders for Learning to Play Atari Games

Developed a deep RL agent that outperformed previous state-of-the-art agents in stochastic environments by minimizing the error in next-state prediction given ("noised" state, action taken) pairs.

Resources: Paper
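The exploration-bonus idea reduces to a small function: corrupt the current state with noise, ask a model to predict the next state, and pay the agent the prediction error as an intrinsic reward. The `predict_next` interface and state shapes below are hypothetical stand-ins, not the project's actual model:

```python
import numpy as np

def intrinsic_reward(predict_next, state, action, true_next,
                     noise_scale=0.1, rng=None):
    """Exploration bonus: error in predicting the next state from a
    noise-corrupted current state and the action taken."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noised = state + rng.normal(scale=noise_scale, size=state.shape)
    predicted = predict_next(noised, action)
    return float(np.mean((predicted - true_next) ** 2))

state = np.zeros(4)
# A well-modeled transition earns no bonus...
r_known = intrinsic_reward(lambda s, a: s, state, 0, state, noise_scale=0.0)
# ...while a poorly modeled one earns a large bonus, encouraging visits.
r_novel = intrinsic_reward(lambda s, a: s, state, 0, state + 1.0, noise_scale=0.0)
```

States the model predicts poorly are, by construction, the ones the agent is rewarded for visiting.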


Search Engine Keyword Popularity Prediction

Created a system dynamics model of a large network, combined with real-world statistics from Google Analytics, to identify common behavior patterns that lead to increases and decreases in keyword popularity.

Resources: Paper


Exploring the Relationship Between Social Network Structure and Individual Preferences

Extended Schelling's experiment using computer models to explain how social groups form and how people's preferences affect the underlying social network structure.

Resources: Paper


Sociopolitical Opinion Analysis

Used complex systems models, such as Voronoi maps, to find social and political scenarios where the opinions of a group do better than the opinions of a few experts.

Resources: Paper (Q&A Style), Paper (Extended Study)


Skills & Proficiency

Python

NumPy

Matplotlib

LaTeX

Markdown

TensorFlow

Chainer

scikit-learn

Java

C

C++

MATLAB

C#

XML

Keras

SciPy

NetLogo

Unity

Lua

Torch

Objective-C, Swift

HTML, CSS, JS

R

Verilog