
Deep Learning Chapter 9 Convolutional Networks

Brief outline below (more of a personal guide, actually). Read from the link.

  1. Convolution Operation Description
    1. Cross-correlation (what most libraries implement under the name "convolution"; see the sketch after this outline)
  2. Why Convolution?
    1. Sparse Interaction
    2. Parameter Sharing
    3. Equivariant Representation
  3. Conv Nets Operation
    1. Convolution
    2. Detector (Nonlinear Function)
    3. Pooling – adds a strong prior that the function the layer learns must be invariant to small translations.
  4. Convolution may imply an infinitely strong prior: weights are shared among neighbors, and weights outside each unit's small receptive field are zero. This prior makes sense if the feature being learned is equivariant to translation.
  5. Variants of Convolution
    1. One kernel extracts one kind of feature; in practice, many different kernels are used in parallel.
    2. downsampling (stride)
    3. border – zero padding
      1. valid convolution
      2. same convolution
      3. full convolution
    4. locally connected layers / unshared convolution
    5. tiled convolution
  6. Structured Output
    1. classification
    2. Tensor of outputs (e.g. per-pixel class probabilities)
  7. Data Types – conv nets can process inputs of varying spatial extent, as long as the variation comes from varying amounts of observation of the same kind of thing (not from optionally included kinds of observation).
  8. Efficient convolution algorithms – If the kernel is “separable”, a much more efficient approach can be used.
  9. Random or unsupervised features – ways to obtain kernels without full supervised training
    1. Random features
    2. Greedy layer-wise pre-training
    3. Unsupervised learning
  10. Neuroscience basis for conv nets
    1. Gabor Functions
  11. History – In a way, conv nets paved the way to the general acceptance of neural networks.
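
To make item 1 concrete, here is a minimal NumPy sketch of 2-D cross-correlation and convolution. The function names, toy image, and edge-detector kernel are my own illustrative choices, not from the book.

```python
import numpy as np

def cross_correlate2d(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    without flipping it (what most DL libraries call 'convolution')."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1   # 'valid' output size (no padding)
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def convolve2d(image, kernel):
    # A true convolution flips the kernel first; since the kernel is
    # learned either way, the distinction rarely matters in practice.
    return cross_correlate2d(image, kernel[::-1, ::-1])

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, -1.0]])   # crude horizontal edge detector
print(cross_correlate2d(image, edge_kernel))
```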

Deep Learning Chapter 6 Deep Feedforward Networks

Brief outline below. Read from link.

  1. Cost Function
    1. Maximum likelihood (cross-entropy)
    2. Mean squared error
    3. Mean absolute error
  2. Output Units
    1. Linear
    2. Sigmoid + Maximum Log Likelihood
    3. Softmax + Maximum Log Likelihood (multiclass)
    4. Gaussian Mixture
  3. Hidden Units
    1. Rectified Linear Unit
      1. Absolute Value Rectification
      2. leaky ReLU
      3. parametric ReLU (PReLU)
      4. Maxout units
    2. Sigmoid Units
      1. Logistic Sigmoid
      2. Tanh
    3. Others
      1. None (i.e. purely linear hidden units)
      2. Softmax
      3. RBF
      4. Softplus
      5. Hard tanh
  4. Architecture Design
    1. Depth vs. width (some functions need exponentially more units when represented by a shallower network)
    2. Connection between layers
  5. Back Propagation
    1. Might need to implement one myself to truly understand this (a minimal sketch follows this outline).
  6. History
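
As a companion to item 5.1, below is a minimal NumPy sketch of the forward pass and back-propagation for a one-hidden-layer ReLU network trained with mean squared error. The toy data, layer sizes, and learning rate are arbitrary assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (sizes chosen arbitrarily)
X = rng.normal(size=(64, 3))           # 64 examples, 3 features
y = X.sum(axis=1, keepdims=True)       # target: sum of the features

# One hidden layer with ReLU, linear output, MSE loss
W1 = rng.normal(scale=0.1, size=(3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

for step in range(500):
    # Forward pass
    z1 = X @ W1 + b1
    h1 = np.maximum(0, z1)             # ReLU
    y_hat = h1 @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass (chain rule, layer by layer)
    d_yhat = 2 * (y_hat - y) / len(X)
    dW2 = h1.T @ d_yhat;  db2 = d_yhat.sum(axis=0, keepdims=True)
    d_h1 = d_yhat @ W2.T
    d_z1 = d_h1 * (z1 > 0)             # gradient through ReLU
    dW1 = X.T @ d_z1;     db1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```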

Deep Learning Chapter 4 Numerical Computation Questions

After reading and digesting Chapter 4 (link), I compiled the following questions to test my comprehension. I’ll post the answers when I review them.

  1. Define the underflow and overflow problem.
  2. How can you modify softmax, for example, to avoid the underflow and overflow problems? (A sketch follows this list.)
  3. Define condition number.
  4. Define poor conditioning.
  5. Define the function you are trying to optimize in a gradient based optimization.
  6. Define the following:
    1. critical points
    2. stationary points
    3. local maximum
    4. local minimum
    5. saddle points
  7. Define partial derivatives and gradients.
  8. Define directional derivatives.
  9. Define the Jacobian matrix.
  10. Define the Hessian matrix.
  11. Define issues with Hessian matrix with poor conditioning.
  12. Define first order optimization algorithms, second order optimization algorithms.
  13. Define Lipschitz constant and its significance.
  14. Define convex optimization algorithms.
  15. Define constrained optimization and 3 approaches you can use to solve it.
  16. Define the Karush-Kuhn-Tucker (KKT) approach.
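
For question 2, here is a minimal sketch of the standard max-subtraction trick for a numerically stable softmax. The function names and test values are my own illustrative choices.

```python
import numpy as np

def softmax_naive(x):
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    # Subtracting the max leaves the result unchanged mathematically,
    # but keeps exp() from overflowing and the denominator from underflowing.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

x = np.array([1000.0, 1001.0, 1002.0])
print(softmax_naive(x))    # overflows: inf / nan
print(softmax_stable(x))   # well-behaved: ~[0.090, 0.245, 0.665]
```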

Deep Learning Chapter 03 Probability and Information Theory Guide Questions

After reading and digesting Chapter 3 (link), I compiled the following questions to test my comprehension. I’ll post the answers when I review them.

A. Probability

  1. What is the purpose of probability theory?
  2. What are its two uses in Deep Learning?
  3. Why probability in ML?
  4. What are the three possible sources of uncertainty?
  5. Is it always better to use “complex and certain rules” than “simple and uncertain rules”?
  6. What is Frequentist probability?
  7. What is Bayesian probability?
  8. What is a random variable?
  9. A random variable can be __ and __ ?
  10. What is a probability distribution?
  11. What is a probability mass function?
  12. What is a joint probability distribution?
  13. What are the 3 properties that a probability mass function must satisfy?
  14. What is a probability density function?
  15. What are the 3 properties that a probability density function must satisfy?
  16. Define marginal probability and its key equation (also known as the sum rule).
  17. Define conditional probability and its key equation.
  18. Define intervention query and causal modeling.
  19. Define the chain rule of conditional probabilities.
  20. Define independence and conditional independence.
  21. Define the formula for expectation (for both discrete and continuous).
  22. Define variance and standard deviation.
  23. Define covariance and correlation.
  24. How are independence and covariance related?
  25. Define the covariance matrix.
  26. Define a Bernoulli Distribution.
  27. Define a Multinoulli Distribution.
  28. Define a Gaussian distribution.
  29. Define a Normal distribution.
  30. What is precision in the Gaussian distribution?
  31. In absence of prior knowledge, why is normal distribution a good default choice (2 reasons)?
  32. Define a multivariate normal distribution.
  33. Define an Exponential distribution.
  34. Define a Laplace distribution.
  35. Define a Dirac distribution.
  36. Define an Empirical distribution.
  37. Is the Dirac delta function a generalized function?
  38. Is the Dirac delta distribution necessary to define an empirical distribution over discrete variables?
  39. Define a Mixture distribution.
  40. Define a Latent variable.
  41. Define a Gaussian Mixture Model and explain why it is called a universal approximator.
  42. Explain what are prior and posterior probabilities.
  43. Define Bayes’ rule (a small worked example follows this list).
  44. Define briefly measure theory, measure zero, and almost everywhere.
  45. When handling two continuous random variables that are related by a deterministic function, what should you be careful about (specifically, how does it affect the domain space of the two variables)?
  46. What equation relates the two variables? What is the equation in higher dimensions?
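
For question 43, a small worked example of Bayes’ rule with made-up numbers (a rare condition and an imperfect diagnostic test):

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), where
# P(B) = P(B|A) * P(A) + P(B|not A) * P(not A)   (sum rule + product rule)

# Made-up illustrative numbers:
p_disease = 0.01              # prior P(disease)
p_pos_given_disease = 0.95    # likelihood P(positive | disease)
p_pos_given_healthy = 0.05    # false positive rate P(positive | healthy)

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"posterior P(disease | positive) = {p_disease_given_pos:.3f}")  # ~0.161
```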

B. Common Functions

  1. Define the logistic sigmoid (including where it saturates).
  2. Define a softplus function (including its range).
  3. Define a logit in statistics.
  4. Note the math properties of these common functions (see the book; a small numeric sketch follows this list).
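
A small numeric sketch of the logistic sigmoid, softplus, and logit (my own illustrative values; this naive softplus can overflow for very large inputs, so treat it as a sketch rather than a robust implementation):

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: maps R to (0, 1); saturates for large |x|.
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # Softplus: smooth approximation to max(0, x); range is (0, inf).
    return np.log1p(np.exp(x))

def logit(p):
    # Logit: inverse of the sigmoid, maps (0, 1) back to R.
    return np.log(p / (1.0 - p))

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(sigmoid(x))          # ~[0.00005, 0.269, 0.5, 0.731, 0.99995]
print(softplus(x))         # ~[0.00005, 0.313, 0.693, 1.313, 10.00005]
print(logit(sigmoid(x)))   # recovers x (up to floating point error)
```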

C. Information Theory

  1. Define Information Theory. What is the basic intuition behind it?
  2. Define self-information. Explain the unit nat, bit, and shannon.
  3. What is Shannon entropy?
  4. What is Differential entropy?
  5. Define the Kullback-Leibler (KL) divergence.
  6. Is KL divergence symmetric? Is it non-negative?
  7. Define cross entropy.
  8. How is cross entropy similar to KL divergence? (See the sketch after this list.)
  9. What is “0 log 0”?
  10. Define a structured probabilistic model.
  11. Define a graphical model.
  12. What is the main equation for a Directed model?
  13. What is the main equation for an Undirected model? What is a clique?
  14. Can a probability distribution itself be classified as directed or undirected?
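
For questions 3, 5, and 7, a minimal NumPy sketch of Shannon entropy, KL divergence, and cross entropy over discrete distributions (the distributions below are chosen arbitrarily for illustration):

```python
import numpy as np

def entropy(p):
    # Shannon entropy in nats: H(P) = -sum p * log p, with 0 log 0 := 0.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    # D_KL(P || Q) = sum p * log(p / q); not symmetric, always >= 0.
    # (Assumes q > 0 wherever p > 0; otherwise the divergence is infinite.)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def cross_entropy(p, q):
    # H(P, Q) = H(P) + D_KL(P || Q)
    return entropy(p) + kl_divergence(p, q)

p = np.array([0.5, 0.25, 0.25])
q = np.array([1/3, 1/3, 1/3])
print(entropy(p))              # ~1.040 nats
print(kl_divergence(p, q))     # ~0.059, differs from...
print(kl_divergence(q, p))     # ~0.057 (not symmetric)
print(cross_entropy(p, q))     # ~1.099 = ln 3, since q is uniform
```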

Note to self: after reading the math taught in this chapter, I realized that many of the things I did not understand before suddenly started to make sense. I know I still need to study a lot, but I got really excited seeing how math enables machine learning and serves as its language and framework.

Research is only useful when it is shared, and teaching provides the best opportunity to share your research with the next generation of scientists.

  1. Be strict about preparation times for teaching material. Set a time allocation (e.g. 3 hours) and stick to it!
  2. Keep the learning objectives in mind when writing lecture material. It will help you focus and cover the essentials.
  3. Remember you don’t have to talk for the full time.

Disclaimer: This is taken directly from a Mendeley article (link) about balancing research and teaching. I felt that reflecting on its key points is essential to building the right character and principles on my path to becoming a researcher.

Beginning Deep Learning

Deep learning has been a very hot topic lately. As part of my OMSCS Big Data for Healthcare class and PhD preparation, it seems I also need to learn about “Deep Learning”. I did watch the Deep Learning videos from Udacity, and I honestly believe those videos are more than enough to give one an overview of Deep Learning. But to do more meaningful work in this area, I need a deeper understanding of “Deep Learning”. Hence, I started reading the popular “deep learning book”. Below is my internalization of chapter 1. Note that aside from my opinions, most of the content below is just me retelling the contents of the book.

Looking at the categories of bodies of knowledge, Deep Learning is basically under Representation Learning, which is under Machine Learning, which is under Artificial Intelligence. Representation learning learns simple representations of the big problem from the data and combines these representations to make more accurate predictions.

3 Notable Phases of Deep Learning History:

  1. Cybernetics (1940s-1960s)
    1. Linear models were created, along with the discovery of their limitations, such as being unable to learn the XOR function.
  2. Connectionism (1980s-1990s)
    1. One main idea of connectionism is that a large number of computational units can achieve intelligent behavior when networked together (as inspired by our brain and the network of neurons it contains).
    2. Distributed representation – the idea that each input to a system should be represented by many features, and each feature should be involved in the representation of many possible inputs (reading the example in the book will make things clearer).
  3. Deep learning (2006-present)

Two neuroscience-related perspectives on deep learning:

  1. The brain is a living example that intelligent behavior is possible, and a straightforward way to build intelligence is to reverse engineer the brain (which is easier said than done).
  2. Assuming that machine learning models encapsulate part of how our brain works, they become useful for shedding light on the brain and the underlying principles of human intelligence.

In recent years, there have been a lot of improvements in the field due to:

  1. Faster computers
  2. More data
    1. The models did not change much compared to the 1980s; what changed was the amount of data we use to train them.
    2. Rough rule of thumb as of 2016:
      1. 5000 labeled examples per category = acceptable performance.
      2. 10 million labeled examples = match or exceed human performance.
  3. New techniques to enable deeper networks
    1. We have more computational resources to run much larger models today; model size roughly doubles every 2.4 years.
    2. If we continue on this track, we will probably reach the same number of neurons as the human brain by the 2050s, although a biological neuron may be more complicated than an artificial one, so an apples-to-apples comparison might be misleading.

Closing Thoughts

As data grows and AI expertise increases, I believe it is important to think about how this will affect certain areas of my life and how I should act now in preparation for the future.

First blog post

This is my very first blog post.

My wife and I moved to Sydney last year. We are still getting used to the lifestyle here, but by God’s grace, things have been going well. Before coming here, we made the big decision to leave our comfort zone and study abroad (and hopefully, in the process, come closer to doing what we believe is God’s plan for our lives). However, things did not go as planned, so we were delayed for a few months, but now I think we are sort of back on track.

I am starting my PhD in Australia in the latter part of the year, and I have read from other posts (link) that starting a blog increases your chances of successfully finishing a PhD, which is why I am starting this blog. My friends know that I am not really keen on social media and writing. I personally think that I am bad at expressing myself, sharing my thoughts, and telling stories. However, just because I am bad at something doesn’t mean that I should cower in fear and stay bad at it. As one of my favorite quotes says, “Courage is not the absence of fear, but doing the right thing in spite of one’s fear”. So here goes my first blog post!

Note to self: I hope I don’t become a “Mikka Bozu (三日坊主)”, a Japanese saying for people who start something with intense passion but quickly lose interest.