Sunday, April 23, 2017

Machine learning 10 - Funny pictures

The following are funny pictures related to machine learning and data science that I found online. They are a great way to pick up some concepts, and it is also nice to use a few of them in a talk so that students can learn in a very relaxed environment. I hope you enjoy them!

Torture your data

Big data is like teenage sex

No trust of large uncertainty

Correlation does not imply causation

Type I and Type II errors

Outlier!

Machine learning protests

Different views of machine learning

History of life

Your product vs. Apple and Google

Good code vs. bad code

What does overfitting look like?

Developers are born brave

It never rains in the Bay Area

Deep learning is easy

Interview a data scientist

Change your habit

Label the axes

Information, Knowledge, and Creativity

The deep learning saga

Acknowledgements

All the figures are from the internet; I thank all the authors. I found many of the pictures from the following links.

Sunday, April 16, 2017

Wife's painting: Tar sands construction site

Trump issued the permit for the Keystone Oil Pipeline on March 24th. I remember my wife went to protest against the same project in 2013, when Obama was president. She was really upset about the approval, and she painted the following as a way to express her feelings. You can check out all my wife's paintings.
This painting is about oil sands/tar sands mining. I got the inspiration after watching the documentary "Before the Flood." I think everyone should watch this movie, especially Americans, because it not only discusses global warming and climate change but also talks about what is happening right now in the U.S. The movie touched me deeply because I think it gets to the essence of the issue. I have never doubted global warming and climate change, but I am curious why so many people still don't believe in them. Even with the most advanced technology, the U.S. still has many people who don't believe in science, and a person who doesn't believe in science can become this country's president. That fact is harder for me to believe than global warming and climate change themselves. This movie gives its answer to people like me. What's more, it offers suggestions about what we can do as individuals to help protect the environment. I think that is so important because it gives people some hope, especially at a time when we are so disappointed with the government's environmental policies.
For me, the most frightening thing about the painting is that you think it is surreal and then realize it is real. It shows the oil sands/tar sands construction site in Alberta, Canada. The terrible fact is that the way humans destroy our environment is beyond imagination. Recently, Trump approved a permit for the Keystone Oil Pipeline Project, which will carry oil from Canada to the Gulf Coast. And this pipeline begins right here, at the oil sands/tar sands mining site in Alberta, Canada. We should know that producing oil from oil sands/tar sands is one of the most destructive ways to get oil. First, they clear all the trees from the land; then they scrape away the shallow layer of topsoil. The process also requires a lot of fresh water to separate the oil from the sand, causing serious water pollution. A report by the University of Alberta said that thousands of birds died in 2010 because they landed on the toxic waste ponds. So this painting is a real scene, showing how the oil sands/tar sands industry is destroying our lovely nature. The green in the distance reminds us of this area's ecology before the mining began.
You can see the real tar sands site below:

Saturday, April 8, 2017

Wife's painting: Life on mars

The following painting is another one from my wife; it is surrealist. Below you can read her description of what she wants to express in it. I hope it also makes us think about what we ourselves can do to protect our planet. You can check out all my wife's paintings.
Considering all the environmental issues we are facing now, such as climate change and global warming, some people are already thinking about moving humans to Mars rather than dealing with the problems on Earth. Recently, the SpaceX rocket company released its multiplanet travel plan, hoping to move thousands of people to Mars by 2040. It is a cool idea, but I don't think it is a good one, especially seeing people so excited and crazy about moving to Mars. As we know, Mars is not our heaven; it has much harsher and more extreme weather than Earth. There is nothing on Mars – absolutely nothing. Its temperature ranges from 27°C during the day to -143°C at night. It is a real hell rather than a heaven. Also, if humans don't change their greedy and selfish nature, we will still have to face the same problems someday, no matter where we are.
This painting tries to show the scene when people finally move to Mars and build their cities. But it is not an exciting picture. The city is an inanimate concrete forest. It looks like tombstones rather than a real city; it is a grave for all the plants and animals that have gone extinct on Earth. A man stands in the foreground, looking at the distant Earth. He misses it, missing its colorful and vigorous life; he wants to go back, back to its glorious past. But who is he? We don't know. I really doubt whether humans can move to Mars before we totally destroy our environment and our mother Earth. So the suit is empty, because we don't know whether we human beings will really survive to see that day. Furthermore, in this painting, animals and plants are kept in glass jars hung from the sky – they are gifts from heaven. When our environment can no longer support life, we hope at least we can find a way to preserve them so that someday they can revive in our future homeland.

Friday, April 7, 2017

Entrepreneur training 7: Business communications

This week we discussed presentations, emails, and meetings in business. I think it is really useful; many of these skills apply beyond business settings.
Presentations
  1. Set the context. This is slide zero (what is the question, why I am here, the theme, the context setting of the presentation)
  2. Set the expectation. State the desired outcome.
Running effective meeting
  • Start with why we are here and ask how to make it a great meeting
  • Agree on objective & the method
  • Leader's job
    • Purpose of the meeting
    • Set the high bar of the meeting
    • Make sure people feel heard
    • Ask & invite opinions
  • Who is making a decision & how
  • Be careful when people spend too much time dragging the meeting in other directions.
What is the CEO's role? The leader's job is to drive consensus.
When you look for jobs in the future, the most important thing is whether the company's culture aligns with yours. The second factor is how long your commute is.
Pitching to investors
  • Objectives are clearer
  • Meeting flow is expected (1 hour)
    • Ice breaker
    • Pitch deck (no longer than 20 min, 1-slide, 3-slide, 5-slide, 10-slide versions)
    • Q&A (20-30 min)
    • Wrap up & next steps (if any)
  • How do you know if the meeting went well
Here is a nice video by David Rose - 10 things to know before you pitch a VC
Common Mistakes
  • Over-explaining the obvious
    • And not explaining what really needs to be explained
  • Rushing through slide zero (only two slides are really important: slide zero and the financial projections)
  • Skipping proper transitions
  • Not having a team slide with photos & relevance
Remember, the purpose of the first date is to get the second date. The goal of the first VC meeting is to intrigue them into a second meeting.
The process from first meeting a VC to getting the money usually takes 5-8 months.
Process: from wink to first meeting is about 2-6 weeks; from the first meeting to the talk with their expert is about 2-8 weeks (this is also when they watch how well you execute); and it is about 2-4 weeks from the term sheet to legal due diligence.
  • wink (a 1-paragraph email)
  • request a meeting (send a 2-page executive summary, don't send the slides)
  • First meeting: try to wow them. If the meeting goes well, they will stop and say, "Let me see if someone else is available." They will never say no outright.
  • 2nd meeting with more people (they try to build consensus among the different partners)
  • Now you talk with their expert (this may be followed by due diligence)
  • If you are doing really well, they will invite you to a partner meeting (the meeting location is very telling: whether it is at a coffee shop, the first floor of the company, or the top-floor conference room, etc.)
  • Then you get a term sheet, 4-8 pages, stating what they want to invest. It is a non-binding offer, and you have about 72 hours to sign it. (This is a brief moment of joy before you go back to your email.)
  • Now the legal due diligence starts; this is mostly lawyer-to-lawyer, and you pay for both sides.
  • Closing
  • About 48 hours to 3 days later, you get the money from the VCs.
When you have a multi-horse race, or a rebound (when the VC has just been turned down by a similar startup), the process will be compressed.
Following up with investors
  • send additional info based on discussion
  • see if they pull, or if only you are pushing
A typical VC funnel: 1000 applications -> 100 investigated closely -> 20 invited to present -> 3 or 4 offers -> 1-2 investments per year
Emails
  • Using signatures
  • Salutation
  • Email title (subject line)
  • Suitable length
  • Sections & organization of email body
  • First line, why I am sending you this email
Pitching to Investors
Tell a story
  • saw a problem
    • Unique insights led to a thesis
    • Market research validated the thesis
    • Assembled a team
    • Know our customers & their problem & our competition
  • We know how to promote this; we have our assumptions & financial projections
  • We need XX money to get to YY stage
We talked a lot in the past about getting money from VCs (venture capital). To understand them better, we need to figure out the following three questions:
  • How do they get their money?
  • Who are they accountable to?
  • How do they make money?
Approaching investors
  • Send a short email to see if they are interested in this space [ideally through a referral]
  • Send executive summary to gauge interest and request a meeting
  • First F2F meeting to intrigue them ... Get a 2nd meeting!
Structure of Exec Summary
  1. What problem exists (unmet need) & who has it
  2. Current alternatives & shortcomings
  3. Your solution
  4. Your unfair advantage (magic sauce)
  5. Positioning vis-a-vis other competitors
  6. Market segment targeted & market size
  7. Business model
  8. G2M strategy
  9. Team
  10. Timeline of progress & milestones
  11. Financial projections

Acknowledgements:

All the materials are from the entrepreneurship class at UC Berkeley taught by Naeem Zafar.

Sunday, April 2, 2017

Enable GPU on MacBook Pro for Deep Learning

Recently, I have been experimenting with some deep learning models on my MacBook. I want to enable GPU support on my MacBook Pro, since the GPU can train models faster. I am currently using Keras on top of the Theano backend. Here I document how I did it; I hope it is also useful for you.

First check the specification

This is the graphics hardware on my MacBook Pro (Mid 2014 model), with OS X Yosemite 10.10.5. This step is mainly for you to get a sense of what food you have before cooking ^)^ You can find the information in 'System Preferences' or by using the following command. For this purpose, the NVIDIA GeForce GT 750M is what I am looking for.
$system_profiler | grep -A35 Graphics/Displays
Graphics/Displays:

    Intel Iris Pro:

      Chipset Model: Intel Iris Pro
      Type: GPU
      Bus: Built-In
      VRAM (Dynamic, Max): 1536 MB
      Vendor: Intel (0x8086)
      Device ID: 0x0d26
      Revision ID: 0x0008
      gMux Version: 4.0.8 [3.2.8]

    NVIDIA GeForce GT 750M:

      Chipset Model: NVIDIA GeForce GT 750M
      Type: GPU
      Bus: PCIe
      PCIe Lane Width: x8
      VRAM (Total): 2048 MB
      Vendor: NVIDIA (0x10de)
      Device ID: 0x0fe9
      Revision ID: 0x00a2
      ROM Revision: 3776
      gMux Version: 4.0.8 [3.2.8]

Install CUDA

CUDA is a parallel computing platform and application programming interface model created by NVIDIA. To enable this capability on the Mac, we need to install the driver and toolkit from NVIDIA. Download the one corresponding to your operating system. At the time of writing, my MacBook Pro was running 10.10.5, an older version of the operating system, so I needed to download the corresponding CUDA release from the Archive.
After installing the correct version of CUDA, we can verify the installation by building and running one of the bundled samples.
$cd /usr/local/cuda/samples
$sudo make -C 1_Utilities/deviceQuery
$./bin/x86_64/darwin/release/deviceQuery

## output
./bin/x86_64/darwin/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GT 750M"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 2048 MBytes (2147024896 bytes)
  ( 2) Multiprocessors, (192) CUDA Cores/MP:     384 CUDA Cores
  GPU Max Clock rate:                            926 MHz (0.93 GHz)
  Memory Clock rate:                             2508 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = GeForce GT 750M
Result = PASS
If you see output like mine, you have installed it successfully. (Note that if you choose the wrong version, you will likely have problems, especially if you install CUDA directly from Homebrew; use 'brew cask info cuda' to check which version you installed.)

Install NVIDIA's cuDNN library

NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks. NVIDIA requires you to sign up before downloading packages, so first sign up here:
https://developer.nvidia.com/accelerated-computing-developer
Download cuDNN (choose the one corresponding to your CUDA version):
https://developer.nvidia.com/cudnn
$tar zxvf ~/Downloads/cudnn-7.5-osx-x64-v6.0.tgz
$sudo cp ./cuda/include/cudnn.h /usr/local/cuda/include/
$sudo cp ./cuda/lib/libcudnn* /usr/local/cuda/lib/
Add the following library paths to the bottom of your .bashrc or .bash_profile file:
export PATH=/usr/local/cuda/bin:$PATH  
export DYLD_LIBRARY_PATH="/usr/local/cuda/lib":$DYLD_LIBRARY_PATH
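To confirm that the dynamic linker can now find cuDNN, here is a quick check (a minimal sketch, assuming the library was copied to /usr/local/cuda/lib as above; cudnnGetVersion() is part of the cuDNN API):
import ctypes

# Load the library we just copied; an OSError here means the path is wrong
# or DYLD_LIBRARY_PATH is not set up yet.
cudnn = ctypes.CDLL('/usr/local/cuda/lib/libcudnn.dylib')
# cudnnGetVersion() returns an encoded version number, e.g. 6020 for cuDNN 6.
print(cudnn.cudnnGetVersion())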

Install pygpu

The last step to enable the GPU on your Mac is to install pygpu. Use conda to install it along with all its dependencies.
$conda install pygpu
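As a quick sanity check (a minimal sketch; pygpu.init comes from the libgpuarray project that ships with pygpu, and 'cuda0' is the same device name Theano uses):
import pygpu

# Open the first CUDA device; this raises an error if the driver, CUDA,
# or libgpuarray are not set up correctly.
ctx = pygpu.init('cuda0')
print(ctx.devname)  # should print something like 'GeForce GT 750M'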

Config Theano to use GPU

Add the following to the .theanorc config file in your home directory (vi ~/.theanorc). The line device = cuda tells Theano to use the GPU instead of the CPU, making the GPU the default choice.
[global]
device = cuda
floatX = float32

Test

All right, now we should be able to use the GPU on the MacBook Pro. Let's do a simple test. In a terminal, open ipython and import theano; you should see something similar to the following. (Don't worry about the warning; it is just saying that I am using a newer version of cuDNN than 5.1.)
In [1]: import theano
/Users/qingkaikong/miniconda2/lib/python2.7/site-packages/theano/gpuarray/dnn.py:135: UserWarning: Your cuDNN version is more recent than Theano. If you encounter problems, try updating Theano or downgrading cuDNN to version 5.1.
  warnings.warn("Your cuDNN version is more recent than "
Using cuDNN version 6020 on context None
Mapped name None to device cuda: GeForce GT 750M (0000:01:00.0)
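Beyond the startup message, the small timing script from the Theano documentation (lightly adapted here) verifies that the compiled function actually runs on the GPU:
from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # enough elements to keep the GPU busy
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
t0 = time.time()
for i in range(iters):
    r = f()
print("Looping %d times took %f seconds" % (iters, time.time() - t0))
# If any element-wise op in the compiled graph is not a GPU op, Theano
# fell back to the CPU.
if numpy.any([isinstance(node.op, tensor.Elemwise) and
              ('Gpu' not in type(node.op).__name__)
              for node in f.maker.fgraph.toposort()]):
    print('Used the CPU')
else:
    print('Used the GPU')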

Let's see the speed gain

Let's run imdb_cnn.py from the examples in the Keras repo. You can use THEANO_FLAGS=device=cpu or THEANO_FLAGS=device=cuda0 on the command line to control whether it runs on the CPU or the GPU.
CPU version
$THEANO_FLAGS=device=cpu python imdb_cnn.py 
## Output
Using Theano backend.
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 400)
x_test shape: (25000, 400)
Build model...
Train on 25000 samples, validate on 25000 samples
Epoch 1/2
25000/25000 [==============================] - 148s - loss: 0.4157 - acc: 0.7978 - val_loss: 0.2956 - val_acc: 0.8768
Epoch 2/2
25000/25000 [==============================] - 149s - loss: 0.2483 - acc: 0.9000 - val_loss: 0.2773 - val_acc: 0.8857
GPU version
$THEANO_FLAGS=device=cuda0 python imdb_cnn.py 
## Output
Using Theano backend.
/Users/qingkaikong/miniconda2/lib/python2.7/site-packages/theano/gpuarray/dnn.py:135: UserWarning: Your cuDNN version is more recent than Theano. If you encounter problems, try updating Theano or downgrading cuDNN to version 5.1.
  warnings.warn("Your cuDNN version is more recent than "
Using cuDNN version 6020 on context None
Mapped name None to device cuda0: GeForce GT 750M (0000:01:00.0)
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
x_train shape: (25000, 400)
x_test shape: (25000, 400)
Build model...
Train on 25000 samples, validate on 25000 samples
Epoch 1/2
25000/25000 [==============================] - 58s - loss: 0.4164 - acc: 0.7956 - val_loss: 0.2969 - val_acc: 0.8752
Epoch 2/2
25000/25000 [==============================] - 56s - loss: 0.2488 - acc: 0.8987 - val_loss: 0.2852 - val_acc: 0.8823

Conclusion

We can see the GPU version is about 3 times faster than the CPU version on my MacBook Pro (roughly 57s vs. 148s per epoch), which is a little disappointing; I was expecting more of a speedup when training deep learning models on the GPU. I tested on a different dataset with a much deeper architecture, and the gain was about the same, roughly 3 times faster. It is better than nothing ^)^

Why GPU is faster than CPU (in some cases)

GPUs are best for data-parallel tasks, which means it may be possible to make an algorithm run faster on a GPU if you can run the same computation on different pieces of data. GPUs are built on a SIMD (Single Instruction, Multiple Data) architecture. This means each streaming multiprocessor (SM; each GPU has several of them) can execute only one instruction across all its threads at a time, but on potentially different data.
This means GPUs are much faster for some tasks, such as graphics processing, linear algebra, video encoding, Monte Carlo Simulation, multiplying matrices for a machine learning algorithm, or powering a database. 
But GPUs get their speed at a cost. A single GPU core is usually much slower than a single CPU core at executing one instruction. But a GPU has several cores (up to 16), each operating in a 32-wide SIMD mode (16 * 32 = 512), so it can execute the same instruction on roughly 500 pieces of data at once. In contrast, common CPUs only have up to 4 or 8 cores, and they operate in 4-wide SIMD, which gives much lower parallelism. Even though a GPU can do more operations at a time, it takes longer than a CPU to execute the same instruction; the speed gain comes only from applying that same instruction to many pieces of data simultaneously. Therefore, if you have sequential tasks, the CPU is a better choice than the GPU.
Computations on GPUs involve an additional overhead compared to CPUs: the data must be copied from main memory to GPU memory before any computation can be done. Thus you need to evaluate whether the data copy would make the GPU implementation slower than the actual CPU implementation.

What Can be Accelerated on the GPU (ref)

The performance characteristics will of course vary from device to device, and also as we refine our implementation:
  • In general, matrix multiplication, convolution, and large element-wise operations can be accelerated a lot (5-50x) when arguments are large enough to keep 30 processors busy.
  • Indexing, dimension-shuffling and constant-time reshaping will be equally fast on GPU as on CPU.
  • Summation over rows/columns of tensors can be a little slower on the GPU than on the CPU.
  • Copying of large quantities of data to and from a device is relatively slow, and often cancels most of the advantage of one or two accelerated functions on that data. Getting GPU performance largely hinges on making data transfer to the device pay off, as the sketch below illustrates.
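To make that last point concrete, here is a minimal sketch (array sizes are illustrative) of the usual trick: put the data into theano.shared variables so it is copied to the device once, and repeated calls to the compiled function reuse memory already on the GPU instead of re-sending the array every call.
import numpy
import theano
import theano.tensor as T

# With device=cuda, shared variables live in GPU memory, so this
# host-to-device copy happens only once, up front.
data = theano.shared(numpy.random.rand(2048, 2048).astype('float32'))
w = theano.shared(numpy.random.rand(2048, 2048).astype('float32'))

# Each call runs the matrix multiplication entirely on the device;
# only the final result is copied back to the host.
f = theano.function([], T.dot(data, w))
result = f()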

References

I thank the authors of the following links. Note that the first two links use TensorFlow directly, while I am using Keras on top of Theano.