Research/Blog
Meeting Minutes from AI Lab session on Saturday 21st Sep in Bengaluru
- September 24, 2019
- Posted by: vsinghal
- Category: Computer Vision, Deep Learning, Generative Modeling, Reinforcement Learning
#CellStratAILab #disrupt4.0 #WeCreateAISuperstars
We had fantastic presentations on advanced Deep Learning concepts at the last Saturday AI Lab.
![](http://www.cellstrat.com/wp-content/uploads/2019/09/Collage-3-1024x768.jpg)
Reinforcement Learning (RL) with Dynamic Programming:
First, Shubha M. started with a superb session on RL with Dynamic Programming. Dynamic Programming is the technique of breaking a problem into subproblems, solving each of them, and then combining their solutions to solve the overall problem.
RL involves Markov Decision Processes (MDPs): stochastic processes in which the next state depends only on the current state and action, not on the history of earlier steps. Moving between MDP states entails action choices and corresponding rewards, and each state transition has a probability associated with it. The optimal policy picks the ideal action in every state for reaching the goal. It is discovered with the help of the Bellman Optimality equation, which states that the optimal value of a state equals the best achievable sum of the immediate reward and the discounted value of the successor state.
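In symbols, the Bellman Optimality equation for the state-value function can be written as (with γ the discount factor on future rewards):

```latex
V^*(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,\bigl[\,R(s, a, s') + \gamma\, V^*(s')\,\bigr]
```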
Dynamic Programming helps solve the Bellman Optimality equation. This involves Iterative Policy Evaluation, where we update the value of each state iteratively while following a fixed (e.g. random) policy. The optimal policy is then derived by greedily choosing, in each state, the action on the value-maximising path. This step is called Policy Improvement, and alternating the two steps until the policy stops changing yields the optimal policy.
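The evaluate-then-improve loop can be sketched on a toy MDP. The 4-state chain below (states, rewards, and deterministic moves are invented purely for illustration) starts from a policy that always moves left and converges to the optimal all-right policy:

```python
# Toy MDP: a 4-state chain 0-1-2-3; reaching state 3 (terminal) pays +1.
# Actions "left"/"right" move deterministically (invented for illustration).
GAMMA = 0.9
STATES = [0, 1, 2, 3]
TERMINAL = 3
ACTIONS = ["left", "right"]

def step(s, a):
    """Deterministic transition: returns (next_state, reward)."""
    if s == TERMINAL:
        return s, 0.0
    s2 = max(0, s - 1) if a == "left" else min(3, s + 1)
    return s2, (1.0 if s2 == TERMINAL else 0.0)

def evaluate(policy, theta=1e-6):
    """Iterative Policy Evaluation: sweep states until values converge."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s == TERMINAL:
                continue
            s2, r = step(s, policy[s])
            v_new = r + GAMMA * V[s2]      # Bellman expectation backup
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

def improve(V):
    """Policy Improvement: act greedily with respect to V."""
    policy = {}
    for s in STATES:
        if s == TERMINAL:
            continue
        def q(a):
            s2, r = step(s, a)
            return r + GAMMA * V[s2]
        policy[s] = max(ACTIONS, key=q)
    return policy

# Policy iteration: evaluate, then improve greedily, repeated.
policy = {0: "left", 1: "left", 2: "left"}   # a poor starting policy
for _ in range(3):
    V = evaluate(policy)
    policy = improve(V)
print(policy)  # every state now moves "right" toward the terminal reward
```

Each pass propagates the terminal reward one state further back, which is why three evaluate/improve rounds suffice on this chain.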
AutoEncoders:
Gouthaman Asokan and I (Vivek Singhal) presented a session on AutoEncoders, which are unsupervised learning algorithms used to discover latent (hidden) features that capture the most critical aspects of the data. They are very useful for cleaning images and other forms of data, unsupervised pre-training of the lower layers of deep neural networks, dimensionality reduction, and de-noising data in general.
AutoEncoders work by trying to reconstruct the input at the output; in this sense they approximate the identity function. However, the network is prevented from simply copying input to output by constraining the hidden layers in some way; without such a constraint, no useful features would be learnt.
AutoEncoders can be Undercomplete or Overcomplete. Undercomplete AutoEncoders restrict the middle (latent) layer to a smaller dimension than the input. This bottleneck forces the hidden layers to discover the most salient aspects of the data. The latent layer is often kept linear (no activation function), so that the coding preserves the full range of the data's values.
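A minimal sketch of the undercomplete idea, using a tied-weight linear autoencoder (the toy 2-D data and training constants are invented for illustration): points lying along one direction are squeezed through a 1-D latent code `z = w·x` and reconstructed as `x̂ = z·w`, and gradient descent on the reconstruction error drives `w` toward the data's dominant direction:

```python
import random

random.seed(0)

# Toy data: 2-D points lying (noisily) along the direction (1, 1), so a
# single latent dimension can capture most of the variation.
data = [(t + random.gauss(0, 0.05), t + random.gauss(0, 0.05))
        for t in [random.uniform(-1, 1) for _ in range(200)]]

w = [0.3, -0.2]   # tied encoder/decoder weights, arbitrary initialisation
lr = 0.05

def mse(w):
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x in data:
        z = w[0] * x[0] + w[1] * x[1]            # encode: 2-D -> 1-D
        r = (x[0] - z * w[0], x[1] - z * w[1])   # residual after decoding
        total += r[0] ** 2 + r[1] ** 2
    return total / len(data)

loss_before = mse(w)
for _ in range(200):
    g = [0.0, 0.0]
    for x in data:
        z = w[0] * x[0] + w[1] * x[1]
        r = (x[0] - z * w[0], x[1] - z * w[1])
        rw = r[0] * w[0] + r[1] * w[1]
        for i in range(2):                       # dL/dw_i = -2(z*r_i + (r.w)*x_i)
            g[i] += -2.0 * (z * r[i] + rw * x[i])
    w = [w[i] - lr * g[i] / len(data) for i in range(2)]
loss_after = mse(w)
```

After training, the reconstruction error drops to roughly the noise floor: the single latent number is enough because it has learnt the salient (1, 1) direction.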
Overcomplete AutoEncoders have a higher-dimensional latent layer, which must be regularized in order to discover useful latent features. They include Sparse AutoEncoders, where we regularize the network by adding a sparsity term to the loss function or by applying dropout. Denoising AutoEncoders learn latent features by reconstructing a clean input from a version that has been corrupted by noise. Variational AutoEncoders (VAEs) use probabilistic modeling: the Encoder maps the input to a mean and standard deviation, the latent code is sampled from the Gaussian distribution they define, and the Decoder reconstructs the output from this sampled code. VAEs are also powerful generative models, since sampling the latent space produces new samples similar to the input data.
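The VAE sampling step between encoder and decoder is usually implemented with the reparameterization trick, z = μ + σ·ε with ε drawn from a standard normal, so that gradients can flow through μ and σ. A minimal sketch (the function name and example values are illustrative; encoders typically output log-variance for numerical stability):

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    """Sample z = mu + sigma * eps, with eps ~ N(0, 1) per dimension.
    Passing eps explicitly makes the function deterministic for testing."""
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    sigma = [math.exp(0.5 * lv) for lv in log_var]   # log-variance -> std dev
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]

mu = [0.5, -1.0]
log_var = [0.0, 0.0]   # log-variance 0 means sigma = 1 in both dimensions
z = reparameterize(mu, log_var)   # a random latent code near mu
```

With ε fixed at zero the sample collapses to the mean, which is exactly why the noise, not the network output, carries all the randomness.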
Object Detection with Single Shot Detector (SSD):
Finally, Niraj Kale presented a deep session on Object Detection in images with the SSD algorithm. The key idea is to detect all objects in a single forward pass of the image through the network. The algorithm predicts several default bounding boxes at each location of multiple feature maps; feature maps at different depths handle different object scales, with earlier, higher-resolution layers detecting smaller objects and deeper layers detecting larger ones. The final prediction combines all these per-layer predictions. At the end, MultiBox retains only the top K predictions that minimise both the localization (loc) and confidence (conf) losses.
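Pruning the many overlapping box predictions down to the final top-K set relies on intersection-over-union (IoU) and greedy non-maximum suppression. A minimal sketch (box coordinates and scores are made-up toy values; real SSD pipelines use vectorized versions of these same two steps):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5, top_k=2):
    """Greedy non-maximum suppression: visit boxes in descending score
    order, keep a box only if it overlaps no kept box too much, and stop
    after retaining at most top_k boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
        if len(keep) == top_k:
            break
    return keep

# Two near-duplicate detections of one object, plus one distinct object.
boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)   # the duplicate (index 1) is suppressed
```

The heavily overlapping second box (IoU ≈ 0.81 with the first) is dropped, leaving one detection per object.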
Experience the AI revolution unfolding right here in Bengaluru. Join us this Saturday (28th Sep 2019) at our AI Lab meetup at our Infantry Rd or Bellandur locations. Register below :-
Infantry Rd AI Lab (Saturday 28th Sep) :-
Register : https://www.meetup.com/Disrupt-4-0/events/264405791/
Topic : Hands-on Workshop on GANs and Pix2pix GANs
Loc. : WeWork, Prestige Central, Infantry Road, Shivaji Nagar, BLR
Presenters : Vivek Singhal
Bellandur AI Lab (Saturday 28th Sep) :-
Register : https://www.meetup.com/Disrupt-4-0/events/srkkgryzmblc/
Topic : Hands-on Workshop on GANs and Pix2pix GANs
Loc. : WeWork, Embassy Tech Village, ORR, BLR
Presenters : Shreyas Jagannath
See you this weekend for the AI Lab workshop! Let’s put India, and Bengaluru, on the global AI map!
Questions? Call me at +91-9742800566!
Best Regards,
Vivek Singhal
Co-Founder & Chief Data Scientist, CellStrat
+91-9742800566