### OpenMPI and MPI4py installation instructions for Ubuntu 16.04.

These instructions install MPI4Py by building it from source. MPI4Py requires OpenMPI (or a similar MPI implementation providing the mpicc compiler wrapper), so let us install OpenMPI first. Start by downloading the release tarball:
wget https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.0.tar.gz

Extract the archive and enter the directory:
tar -zxf openmpi-3.1.0.tar.gz
cd openmpi-3.1.0/

Configure the build with an install prefix in your home directory:
./configure --prefix="/home/$USER/.openmpi"

Now compile and install (the compile step does not need root, only the install does):
make -j4
sudo make install

If there are no errors, OpenMPI is successfully installed. To get the environment paths linked correctly, add the following to your .bashrc file:
export PATH="$PATH:/home/$USER/.openmpi/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/.openmpi/lib/"

Run source ~/.bashrc to make the changes available in the current session.
To confirm that OpenMPI installed successfully, run:
mpiexec -V

This will print the version of MPI installed. Good job if you got that right!

## Install MPI4Py.

### Requirements

sudo apt-get install libopenmpi-dev python-dev python3-dev

wget https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-3.0.0.tar.gz

Extract the archive and enter the directory:
tar -zxf mpi4py-3.0.0.tar.gz
cd mpi4py-3.0.0/

Now build and install the package. The build step does not need root; only the install does:
python setup.py build
sudo python setup.py install

Or use python3 if you want the Python 3 bindings:
python3 setup.py build
sudo python3 setup.py install

If the build and install steps completed without errors, MPI4Py is installed on your system. Let us confirm that by running (inside the mpi4py-3.0.0 folder):
mpiexec -n 5 python demo/helloworld.py

If the installation went well, you should see the Hello, World! output from 5 different processes (ranks).
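Roughly, the demo script has every launched process report its rank within MPI.COMM_WORLD. Here is a minimal sketch of that idea (the ImportError fallback is our own addition so it also runs on a machine without mpi4py; it is not part of the bundled demo):

```python
# Minimal MPI-style hello world: each process prints its rank.
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:
    # mpi4py not installed: pretend to be a one-process world
    rank, size = 0, 1

print("Hello, World! I am process %d of %d." % (rank, size))
```

Save it as, say, hello.py and launch it with mpiexec -n 5 python hello.py to see five greetings, one per rank.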

### Bracket Validator

You're working with an intern who keeps coming to you with JavaScript code that won't run because the braces, brackets, and parentheses are off. To save you both some time, you decide to write a braces/brackets/parentheses validator.
Let's say:
• '(', '{', '[' are called "openers."
• ')', '}', ']' are called "closers."
Write an efficient function that tells us whether or not an input string's openers and closers are properly nested.
Examples:
• "{ [ ] ( ) }" should return True
• "{ [ ( ] ) }" should return False
• "{ [ }" should return False
Simply making sure each opener has a corresponding closer is not enough—we must also confirm that they are correctly ordered.
For example, "{ [ ( ] ) }" should return False, even though each opener can be matched to a closer.
We can do this in $O(n)$ time and space. One iteration is all we need!

### Breakdown

We can use a greedy approach to walk through our string character by character, making sure the string validates "so far" until we reach the end.
What do we do when we find an opener or closer?
Well, we'll need to keep track of our openers so that we can confirm they get closed properly. What data structure should we use to store them? When choosing a data structure, we should start by deciding on the properties we want. In this case, we should figure out how we will want to retrieve our openers from the data structure! So next we need to know: what will we do when we find a closer?
Suppose we're in the middle of walking through our string, and we find our first closer:
  [ { ( ) ] . . . .
        ^

How do we know whether or not that closer in that position is valid?
A closer is valid if and only if it's the closer for the most recently seen, unclosed opener. In this case, '(' was seen most recently, so we know our closing ')' is valid.
So we want to store our openers in such a way that we can get the most recently added one quickly, and we can remove the most recently added one quickly (when it gets closed). Does this sound familiar?
What we need is a stack!

### Solution

We iterate through our string, making sure that:
1. each closer corresponds to the most recently seen, unclosed opener
2. every opener and closer is in a pair
We use a stack to keep track of the most recently seen, unclosed opener. And if the stack is ever empty when we come to a closer, we know that closer doesn't have an opener.
So as we iterate:
• If we see an opener, we push it onto the stack.
• If we see a closer, we check to see if it is the closer for the opener at the top of the stack. If it is, we pop from the stack. If it isn't, or if the stack is empty, we return False.
If we finish iterating and our stack is empty, we know every opener was properly closed.

def is_valid(code):
    openers_to_closers = {
        '(' : ')',
        '{' : '}',
        '[' : ']',
    }

    openers = set(openers_to_closers.keys())
    closers = set(openers_to_closers.values())

    openers_stack = []

    for char in code:
        if char in openers:
            openers_stack.append(char)
        elif char in closers:
            if not openers_stack:
                return False
            else:
                last_unclosed_opener = openers_stack.pop()
                # If this closer doesn't correspond to the most recently
                # seen unclosed opener, short-circuit, returning False
                if openers_to_closers[last_unclosed_opener] != char:
                    return False

    return openers_stack == []
 

### Complexity

$O(n)$ time (one iteration through the string), and $O(n)$ space (in the worst case, all of our characters are openers, so we push them all onto the stack).

### Bonus

In Ruby, sometimes expressions are surrounded by vertical bars, "|like this|". Extend your validator to validate vertical bars. Careful: there's no difference between the "opener" and "closer" in this case—they're the same character!
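One way to handle this (a sketch, not the only approach): since '|' is its own closer, treat a '|' as a closer whenever an unmatched '|' sits on top of the stack, and as an opener otherwise.

```python
def is_valid(code):
    """Bracket validator extended to handle Ruby-style vertical bars."""
    openers_to_closers = {'(': ')', '{': '}', '[': ']'}
    openers = set(openers_to_closers.keys())
    closers = set(openers_to_closers.values())

    openers_stack = []
    for char in code:
        if char == '|':
            # '|' closes a '|' already on top of the stack;
            # otherwise it opens a new pair
            if openers_stack and openers_stack[-1] == '|':
                openers_stack.pop()
            else:
                openers_stack.append(char)
        elif char in openers:
            openers_stack.append(char)
        elif char in closers:
            if not openers_stack:
                return False
            if openers_to_closers.get(openers_stack.pop()) != char:
                return False

    return openers_stack == []
```

Note that this rejects interleavings like "|(|)": the inner '|' is pushed as a new opener (the top of the stack is '('), so the ')' then fails to match it.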
 

### Monitoring NVIDIA GPUs with nvidia-smi

Most users know how to check the status of their CPUs, see how much system memory is free, or find out how much disk space is free. In contrast, keeping tabs on the health and status of GPUs has historically been more difficult. If you don’t know where to look, it can even be difficult to determine the type and capabilities of the GPUs in a system. Thankfully, NVIDIA’s latest hardware and software tools have made good improvements in this respect.

The tool is NVIDIA’s System Management Interface (nvidia-smi). Depending on the generation of your card, various levels of information can be gathered. Additionally, GPU configuration options (such as ECC memory capability) may be enabled and disabled.
As an aside, if you find that you’re having trouble getting your NVIDIA GPUs to run GPGPU code, nvidia-smi can be handy. For example, on some systems the proper NVIDIA devices in /dev are not created at boot. Running a simple nvidia-smi query as root will initialize all the cards and create the proper devices in /dev. Other times, it’s just useful to make sure all the GPU cards are visible and communicating properly. Here’s the default output from a recent version with four Tesla V100 GPU cards:
nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48                 Driver Version: 410.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000000:18:00.0 Off |                    0 |
| N/A   40C    P0    55W / 250W |  31194MiB / 32480MiB |     44%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  Off  | 00000000:3B:00.0 Off |                    0 |
| N/A   40C    P0    36W / 250W |  30884MiB / 32480MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-PCIE...  Off  | 00000000:86:00.0 Off |                    0 |
| N/A   41C    P0    39W / 250W |  30884MiB / 32480MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-PCIE...  Off  | 00000000:AF:00.0 Off |                    0 |
| N/A   39C    P0    37W / 250W |  30884MiB / 32480MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0    305892      C   /usr/bin/python                            31181MiB |
+-----------------------------------------------------------------------------+

## Persistence Mode

On Linux, you can set GPUs to persistence mode to keep the NVIDIA driver loaded even when no applications are accessing the cards. This is particularly useful when you have a series of short jobs running. Persistence mode uses a few more watts per idle GPU, but prevents the fairly long delays that occur each time a GPU application is started. It is also necessary if you’ve assigned specific clock speeds or power limits to the GPUs (as those changes are lost when the NVIDIA driver is unloaded). Enable persistence mode on all GPUs by running:
nvidia-smi -pm 1
On Windows, nvidia-smi is not able to set persistence mode. Instead, you need to set your computational GPUs to TCC mode. This should be done through NVIDIA’s graphical GPU device management panel.

## GPUs supported by nvidia-smi

NVIDIA’s SMI tool supports essentially any NVIDIA GPU released since the year 2011. These include the Tesla, Quadro, and GeForce devices from Fermi and higher architecture families (Kepler, Maxwell, Pascal, Volta, etc).
Supported products include:
Tesla: S1070, S2050, C1060, C2050/70, M2050/70/90, X2070/90, K10, K20, K20X, K40, K80, M40, P40, P100, V100
Quadro: 4000, 5000, 6000, 7000, M2070-Q, K-series, M-series, P-series, RTX-series
GeForce: varying levels of support, with fewer metrics available than on the Tesla and Quadro products

## Querying GPU Status

Microway’s GPU Test Drive cluster, which we provide as a benchmarking service to our customers, contains a group of NVIDIA’s latest Tesla GPUs. These are NVIDIA’s high-performance compute GPUs and provide a good deal of health and status information. The examples below are taken from this internal cluster.
To list all available NVIDIA devices, run:
nvidia-smi -L

GPU 0: Tesla K40m (UUID: GPU-d0e093a0-c3b3-f458-5a55-6eb69fxxxxxx)
GPU 1: Tesla K40m (UUID: GPU-d105b085-7239-3871-43ef-975ecaxxxxxx)
To list certain details about each GPU, try:
nvidia-smi --query-gpu=index,name,uuid,serial --format=csv

0, Tesla K40m, GPU-d0e093a0-c3b3-f458-5a55-6eb69fxxxxxx, 0323913xxxxxx
1, Tesla K40m, GPU-d105b085-7239-3871-43ef-975ecaxxxxxx, 0324214xxxxxx
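The CSV output is convenient to post-process in a script. A small sketch (the field list matches the query above; the sample rows are the anonymized listing shown, as produced with the noheader variant of --format=csv):

```python
import csv
import io

# Sample output of:
#   nvidia-smi --query-gpu=index,name,uuid,serial --format=csv,noheader
# (values anonymized, as in the listing above)
sample = """\
0, Tesla K40m, GPU-d0e093a0-c3b3-f458-5a55-6eb69fxxxxxx, 0323913xxxxxx
1, Tesla K40m, GPU-d105b085-7239-3871-43ef-975ecaxxxxxx, 0324214xxxxxx
"""

def parse_gpu_csv(text):
    """Parse nvidia-smi CSV rows into dicts keyed by the queried fields."""
    fields = ["index", "name", "uuid", "serial"]
    rows = []
    for record in csv.reader(io.StringIO(text)):
        if not record:
            continue
        # nvidia-smi pads fields with spaces after each comma
        rows.append({k: v.strip() for k, v in zip(fields, record)})
    return rows

gpus = parse_gpu_csv(sample)
```

In a real script you would feed it the output of subprocess.run(["nvidia-smi", ...]) instead of the hard-coded sample.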
To monitor overall GPU usage with 1-second update intervals:
nvidia-smi dmon

# gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk
# Idx     W     C     C     %     %     %     %   MHz   MHz
    0    43    35     -     0     0     0     0  2505  1075
    1    42    31     -    97     9     0     0  2505  1075
(in this example, one GPU is idle and one GPU has 97% of the CUDA sm "cores" in use)
To monitor per-process GPU usage with 1-second update intervals:
nvidia-smi pmon

# gpu        pid  type    sm   mem   enc   dec   command
# Idx          #   C/G     %     %     %     %   name
    0      14835     C    45    15     0     0   python
    1      14945     C    64    50     0     0   python
(in this case, two different python processes are running; one on each GPU)

## Monitoring and Managing GPU Boost

The GPU Boost feature which NVIDIA has included with more recent GPUs allows the GPU clocks to vary depending upon load (achieving maximum performance so long as power and thermal headroom are available). However, the amount of available headroom will vary by application (and even by input file!) so users and administrators should keep their eyes on the status of the GPUs.
A listing of available clock speeds can be shown for each GPU (in this case, the Tesla V100):
nvidia-smi -q -d SUPPORTED_CLOCKS

GPU 00000000:18:00.0
    Supported Clocks
        Memory                      : 877 MHz
            Graphics                : 1380 MHz
            Graphics                : 1372 MHz
            Graphics                : 1365 MHz
            Graphics                : 1357 MHz
            [snip]
            Graphics                : 157 MHz
            Graphics                : 150 MHz
            Graphics                : 142 MHz
            Graphics                : 135 MHz
As shown, the Tesla V100 GPU supports 167 different clock speeds (from 135 MHz to 1380 MHz). However, only one memory clock speed is supported (877 MHz). Some GPUs support two different memory clock speeds (one high speed and one power-saving speed). Typically, such GPUs only support a single GPU clock speed when the memory is in the power-saving speed (which is the idle GPU state). On all recent Tesla and Quadro GPUs, GPU Boost automatically manages these speeds and runs the clocks as fast as possible (within the thermal/power limits and any limits set by the administrator).
To review the current GPU clock speed, default clock speed, and maximum possible clock speed, run:
nvidia-smi -q -d CLOCK

GPU 00000000:18:00.0
    Clocks
        Graphics                    : 1230 MHz
        SM                          : 1230 MHz
        Memory                      : 877 MHz
        Video                       : 1110 MHz
    Applications Clocks
        Graphics                    : 1230 MHz
        Memory                      : 877 MHz
    Default Applications Clocks
        Graphics                    : 1230 MHz
        Memory                      : 877 MHz
    Max Clocks
        Graphics                    : 1380 MHz
        SM                          : 1380 MHz
        Memory                      : 877 MHz
        Video                       : 1237 MHz
    Max Customer Boost Clocks
        Graphics                    : 1380 MHz
    SM Clock Samples
        Duration                    : 0.01 sec
        Number of Samples           : 4
        Max                         : 1230 MHz
        Min                         : 135 MHz
        Avg                         : 944 MHz
    Memory Clock Samples
        Duration                    : 0.01 sec
        Number of Samples           : 4
        Max                         : 877 MHz
        Min                         : 877 MHz
        Avg                         : 877 MHz
    Clock Policy
        Auto Boost                  : N/A
        Auto Boost Default          : N/A
Ideally, you’d like all clocks to be running at the highest speed all the time. However, this will not be possible for all applications. To review the current state of each GPU and any reasons for clock slowdowns, use the PERFORMANCE flag:
nvidia-smi -q -d PERFORMANCE

GPU 00000000:18:00.0
    Performance State               : P0
    Clocks Throttle Reasons
        Idle                        : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap                : Not Active
        HW Slowdown                 : Not Active
        HW Thermal Slowdown         : Not Active
        HW Power Brake Slowdown     : Not Active
        Sync Boost                  : Not Active
        SW Thermal Slowdown         : Not Active
        Display Clock Setting       : Not Active
If any of the GPU clocks is running at a slower speed, one or more of the above Clocks Throttle Reasons will be marked as active. The most concerning condition would be if HW Slowdown was active, as this would most likely indicate a power or cooling issue. The remaining conditions typically indicate that the card is idle or has been manually set into a slower mode by a system administrator.
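For automated health checks, text like the above is easy to scan. A sketch of such a scan (the parsing is an assumption about the query's textual layout, not an NVIDIA API; the sample below is the listing above with SW Power Cap flipped to Active for illustration):

```python
# Sample `nvidia-smi -q -d PERFORMANCE` output, with one reason
# set to Active purely for demonstration purposes.
sample = """\
GPU 00000000:18:00.0
    Performance State               : P0
    Clocks Throttle Reasons
        Idle                        : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap                : Active
        HW Slowdown                 : Not Active
"""

def active_throttle_reasons(text):
    """Return the names of throttle reasons reported as Active."""
    reasons = []
    in_section = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == "Clocks Throttle Reasons":
            in_section = True
            continue
        if in_section and ":" in stripped:
            name, _, value = stripped.partition(":")
            value = value.strip()
            if value == "Active":
                reasons.append(name.strip())
            elif value != "Not Active":
                in_section = False  # we've left the throttle section
    return reasons
```

A monitoring script could alert whenever this list is non-empty (and especially if it contains HW Slowdown).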

## Reviewing System/GPU Topology and NVLink with nvidia-smi

To properly take advantage of more advanced NVIDIA GPU features (such as GPU Direct), it is vital that the system topology be properly configured. The topology refers to how the various system devices (GPUs, InfiniBand HCAs, storage controllers, etc.) connect to each other and to the system’s CPUs. Certain topology types will reduce performance or even cause certain features to be unavailable. To help tackle such questions, nvidia-smi supports system topology and connectivity queries:
nvidia-smi topo --matrix

        GPU0    GPU1    GPU2    GPU3    mlx4_0  CPU Affinity
GPU0     X      PIX     PHB     PHB     PHB     0-11
GPU1    PIX      X      PHB     PHB     PHB     0-11
GPU2    PHB     PHB      X      PIX     PHB     0-11
GPU3    PHB     PHB     PIX      X      PHB     0-11
mlx4_0  PHB     PHB     PHB     PHB      X

Legend:

X   = Self
SOC = Path traverses a socket-level link (e.g. QPI)
PHB = Path traverses a PCIe host bridge
PXB = Path traverses multiple PCIe internal switches
PIX = Path traverses a PCIe internal switch
Reviewing this section will take some getting used to, but can be very valuable. The above configuration shows two Tesla K80 GPUs and one Mellanox FDR InfiniBand HCA (mlx4_0) all connected to the first CPU of a server. Because the CPUs are 12-core Xeons, the topology tool recommends that jobs be assigned to the first 12 CPU cores (although this will vary by application).
Higher-complexity systems require additional care in examining their configuration and capabilities. Below is the output of nvidia-smi topology for the NVIDIA DGX-1 system, which includes two 20-core CPUs, eight NVLink-connected GPUs, and four Mellanox InfiniBand adapters:
 GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 mlx5_0 mlx5_2 mlx5_1 mlx5_3 CPU Affinity
GPU0  X  NV1 NV1 NV2 NV2 SYS SYS SYS PIX SYS PHB SYS 0-19,40-59
GPU1 NV1  X  NV2 NV1 SYS NV2 SYS SYS PIX SYS PHB SYS 0-19,40-59
GPU2 NV1 NV2  X  NV2 SYS SYS NV1 SYS PHB SYS PIX SYS 0-19,40-59
GPU3 NV2 NV1 NV2  X  SYS SYS SYS NV1 PHB SYS PIX SYS 0-19,40-59
GPU4 NV2 SYS SYS SYS  X  NV1 NV1 NV2 SYS PIX SYS PHB 20-39,60-79
GPU5 SYS NV2 SYS SYS NV1  X  NV2 NV1 SYS PIX SYS PHB 20-39,60-79
GPU6 SYS SYS NV1 SYS NV1 NV2  X  NV2 SYS PHB SYS PIX 20-39,60-79
GPU7 SYS SYS SYS NV1 NV2 NV1 NV2  X  SYS PHB SYS PIX 20-39,60-79
mlx5_0 PIX PIX PHB PHB SYS SYS SYS SYS  X  SYS PHB SYS
mlx5_2 SYS SYS SYS SYS PIX PIX PHB PHB SYS  X  SYS PHB
mlx5_1 PHB PHB PIX PIX SYS SYS SYS SYS PHB SYS  X  SYS
mlx5_3 SYS SYS SYS SYS PHB PHB PIX PIX SYS PHB SYS  X

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
PIX  = Connection traversing a single PCIe switch
NV#  = Connection traversing a bonded set of # NVLinks
The NVLink connections themselves can also be queried to ensure status, capability, and health. Readers are encouraged to consult NVIDIA documentation to better understand the specifics. Short summaries from nvidia-smi on DGX-1 are shown below.
nvidia-smi nvlink --status

GPU 0: Tesla V100-SXM2-32GB

[snip]

GPU 7: Tesla V100-SXM2-32GB
Link 5: 25.781 GB/s
nvidia-smi nvlink --capabilities

GPU 0: Tesla V100-SXM2-32GB
Link 0, P2P is supported: true
Link 0, P2P atomics supported: true
Link 0, System memory atomics supported: true
Link 0, SLI is supported: false

[snip]

Link 5, P2P is supported: true
Link 5, P2P atomics supported: true
Link 5, System memory atomics supported: true
Link 5, SLI is supported: false
Link 5, Link is supported: false
Get in touch with one of our HPC GPU experts if you have questions on these topics.

## Printing all GPU Details

To list all available data on a particular GPU, specify the ID of the card with -i. Here’s the output from an older Tesla GPU card:
nvidia-smi -i 0 -q

==============NVSMI LOG==============

Timestamp                       : Mon Dec  5 22:05:49 2011

Driver Version                  : 270.41.19

Attached GPUs                   : 2

GPU 0:2:0
    Product Name                : Tesla M2090
    Display Mode                : Disabled
    Persistence Mode            : Disabled
    Driver Model
        Current                 : N/A
        Pending                 : N/A
    Serial Number               : 032251100xxxx
    GPU UUID                    : GPU-2b1486407f70xxxx-98bdxxxx-660cxxxx-1d6cxxxx-9fbd7e7cd9bf55a7cfb2xxxx
    Inforom Version
        OEM Object              : 1.1
        ECC Object              : 2.0
        Power Management Object : 4.0
    PCI
        Bus                     : 2
        Device                  : 0
        Domain                  : 0
        Device Id               : 109110DE
        Bus Id                  : 0:2:0
    Fan Speed                   : N/A
    Memory Usage
        Total                   : 5375 Mb
        Used                    : 9 Mb
        Free                    : 5365 Mb
    Compute Mode                : Default
    Utilization
        Gpu                     : 0 %
        Memory                  : 0 %
    Ecc Mode
        Current                 : Enabled
        Pending                 : Enabled
    ECC Errors
        Volatile
            Single Bit
                Device Memory   : 0
                Register File   : 0
                L1 Cache        : 0
                L2 Cache        : 0
                Total           : 0
            Double Bit
                Device Memory   : 0
                Register File   : 0
                L1 Cache        : 0
                L2 Cache        : 0
                Total           : 0
        Aggregate
            Single Bit
                Device Memory   : 0
                Register File   : 0
                L1 Cache        : 0
                L2 Cache        : 0
                Total           : 0
            Double Bit
                Device Memory   : 0
                Register File   : 0
                L1 Cache        : 0
                L2 Cache        : 0
                Total           : 0
    Temperature
        Gpu                     : N/A
    Power State                 : P12
    Power Management            : Supported
    Power Draw                  : 31.57 W
    Power Limit                 : 225 W
    Clocks
        Graphics                : 50 MHz
        SM                      : 100 MHz
        Memory                  : 135 MHz
The above example shows an idle card. Here is an excerpt for a card running GPU-accelerated AMBER:
nvidia-smi -i 0 -q -d MEMORY,UTILIZATION,POWER,CLOCK,COMPUTE

==============NVSMI LOG==============

Timestamp                       : Mon Dec  5 22:32:00 2011

Driver Version                  : 270.41.19

Attached GPUs                   : 2

GPU 0:2:0
    Memory Usage
        Total                   : 5375 Mb
        Used                    : 1904 Mb
        Free                    : 3470 Mb
    Compute Mode                : Default
    Utilization
        Gpu                     : 67 %
        Memory                  : 42 %
    Power State                 : P0
    Power Management            : Supported
    Power Draw                  : 109.83 W
    Power Limit                 : 225 W
    Clocks
        Graphics                : 650 MHz
        SM                      : 1301 MHz
        Memory                  : 1848 MHz
You’ll notice that unfortunately the earlier M-series passively-cooled Tesla GPUs do not report temperatures to nvidia-smi. More recent Quadro and Tesla GPUs support a greater quantity of metrics data:
==============NVSMI LOG==============
Timestamp                           : Mon Nov  5 14:50:59 2018
Driver Version                      : 410.48

Attached GPUs                       : 4
GPU 00000000:18:00.0
    Product Name                    : Tesla V100-PCIE-32GB
    Product Brand                   : Tesla
    Display Mode                    : Enabled
    Display Active                  : Disabled
    Persistence Mode                : Disabled
    Accounting Mode                 : Disabled
    Accounting Mode Buffer Size     : 4000
    Driver Model
        Current                     : N/A
        Pending                     : N/A
    Serial Number                   : 032161808xxxx
    GPU UUID                        : GPU-4965xxxx-79e3-7941-12cb-1dfe9c53xxxx
    Minor Number                    : 0
    VBIOS Version                   : 88.00.48.00.02
    MultiGPU Board                  : No
    Board ID                        : 0x1800
    GPU Part Number                 : 900-2G500-0010-000
    Inforom Version
        Image Version               : G500.0202.00.02
        OEM Object                  : 1.1
        ECC Object                  : 5.0
        Power Management Object     : N/A
    GPU Operation Mode
        Current                     : N/A
        Pending                     : N/A
    GPU Virtualization Mode
        Virtualization mode         : None
    IBMNPU
        Relaxed Ordering Mode       : N/A
    PCI
        Bus                         : 0x18
        Device                      : 0x00
        Domain                      : 0x0000
        Device Id                   : 0x1DB610DE
        Bus Id                      : 00000000:18:00.0
        Sub System Id               : 0x124A10DE
        PCIe Generation
            Max                     : 3
            Current                 : 3
        Link Width
            Max                     : 16x
            Current                 : 16x
        Bridge Chip
            Type                    : N/A
            Firmware                : N/A
        Replays since reset         : 0
        Tx Throughput               : 31000 KB/s
        Rx Throughput               : 155000 KB/s
    Fan Speed                       : N/A
    Performance State               : P0
    Clocks Throttle Reasons
        Idle                        : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap                : Not Active
        HW Slowdown                 : Not Active
        HW Thermal Slowdown         : Not Active
        HW Power Brake Slowdown     : Not Active
        Sync Boost                  : Not Active
        SW Thermal Slowdown         : Not Active
        Display Clock Setting       : Not Active
    FB Memory Usage
        Total                       : 32480 MiB
        Used                        : 31194 MiB
        Free                        : 1286 MiB
    BAR1 Memory Usage
        Total                       : 32768 MiB
        Used                        : 8 MiB
        Free                        : 32760 MiB
    Compute Mode                    : Default
    Utilization
        Gpu                         : 44 %
        Memory                      : 4 %
        Encoder                     : 0 %
        Decoder                     : 0 %
    Encoder Stats
        Active Sessions             : 0
        Average FPS                 : 0
        Average Latency             : 0
    FBC Stats
        Active Sessions             : 0
        Average FPS                 : 0
        Average Latency             : 0
    Ecc Mode
        Current                     : Enabled
        Pending                     : Enabled
    ECC Errors
        Volatile
            Single Bit
                Device Memory       : 0
                Register File       : 0
                L1 Cache            : 0
                L2 Cache            : 0
                Texture Memory      : N/A
                Texture Shared      : N/A
                CBU                 : N/A
                Total               : 0
            Double Bit
                Device Memory       : 0
                Register File       : 0
                L1 Cache            : 0
                L2 Cache            : 0
                Texture Memory      : N/A
                Texture Shared      : N/A
                CBU                 : 0
                Total               : 0
        Aggregate
            Single Bit
                Device Memory       : 0
                Register File       : 0
                L1 Cache            : 0
                L2 Cache            : 0
                Texture Memory      : N/A
                Texture Shared      : N/A
                CBU                 : N/A
                Total               : 0
            Double Bit
                Device Memory       : 0
                Register File       : 0
                L1 Cache            : 0
                L2 Cache            : 0
                Texture Memory      : N/A
                Texture Shared      : N/A
                CBU                 : 0
                Total               : 0
    Retired Pages
        Single Bit ECC              : 0
        Double Bit ECC              : 0
        Pending                     : No
    Temperature
        GPU Current Temp            : 40 C
        GPU Shutdown Temp           : 90 C
        GPU Slowdown Temp           : 87 C
        GPU Max Operating Temp      : 83 C
        Memory Current Temp         : 39 C
        Memory Max Operating Temp   : 85 C
    Power Management                : Supported
    Power Draw                      : 58.81 W
    Power Limit                     : 250.00 W
    Default Power Limit             : 250.00 W
    Enforced Power Limit            : 250.00 W
    Min Power Limit                 : 100.00 W
    Max Power Limit                 : 250.00 W
    Clocks
        Graphics                    : 1380 MHz
        SM                          : 1380 MHz
        Memory                      : 877 MHz
        Video                       : 1237 MHz
    Applications Clocks
        Graphics                    : 1230 MHz
        Memory                      : 877 MHz
    Default Applications Clocks
        Graphics                    : 1230 MHz
        Memory                      : 877 MHz
    Max Clocks
        Graphics                    : 1380 MHz
        SM                          : 1380 MHz
        Memory                      : 877 MHz
        Video                       : 1237 MHz
    Max Customer Boost Clocks
        Graphics                    : 1380 MHz
    Clock Policy
        Auto Boost                  : N/A
        Auto Boost Default          : N/A
    Processes
        Process ID                  : 315406
            Type                    : C
            Name                    : /usr/bin/python
            Used GPU Memory         : 31181 MiB

Of course, we haven’t covered all the possible uses of the nvidia-smi tool. To read the full list of options, run nvidia-smi -h (it’s fairly lengthy). Some of the sub-commands have their own help section. If you need to change settings on your cards, you’ll want to look at the device modification section:
    -pm,  --persistence-mode=   Set persistence mode: 0/DISABLED, 1/ENABLED
    -e,   --ecc-config=         Toggle ECC support: 0/DISABLED, 1/ENABLED
    -p,   --reset-ecc-errors=   Reset ECC error counts: 0/VOLATILE, 1/AGGREGATE
    -c,   --compute-mode=       Set MODE for compute applications:
                                0/DEFAULT, 1/EXCLUSIVE_PROCESS,
                                2/PROHIBITED
          --gom=                Set GPU Operation Mode:
                                0/ALL_ON, 1/COMPUTE, 2/LOW_DP
    -r    --gpu-reset           Trigger reset of the GPU.
                                Can be used to reset the GPU HW state in situations
                                that would otherwise require a machine reboot.
                                Typically useful if a double bit ECC error has
                                occurred.
                                Reset operations are not guarenteed to work in
                                all cases and should be used with caution.
    -vm   --virt-mode=          Switch GPU Virtualization Mode:
                                Sets GPU virtualization mode to 3/VGPU or 4/VSGA
                                Virtualization mode of a GPU can only be set when
                                it is running on a hypervisor.
    -lgc  --lock-gpu-clocks=    Specifies  clocks as a
                                pair (e.g. 1500,1500) that defines the range
                                of desired locked GPU clock speed in MHz.
                                Setting this will supercede application clocks
                                and take effect regardless if an app is running.
                                Input can also be a singular desired clock value
                                (e.g. ).
    -rgc  --reset-gpu-clocks
                                Resets the Gpu clocks to the default values.
    -ac   --applications-clocks= Specifies  clocks as a
                                pair (e.g. 2000,800) that defines GPU's
                                speed in MHz while running applications on a GPU.
    -rac  --reset-applications-clocks
                                Resets the applications clocks to the default values.
    -acp  --applications-clocks-permission=
                                Toggles permission requirements for -ac and -rac commands:
                                0/UNRESTRICTED, 1/RESTRICTED
    -pl   --power-limit=        Specifies maximum power management limit in watts.
    -am   --accounting-mode=    Enable or disable Accounting Mode: 0/DISABLED, 1/ENABLED
    -caa  --clear-accounted-apps
                                Clears all the accounted PIDs in the buffer.
          --auto-boost-default= Set the default auto boost policy to 0/DISABLED
                                or 1/ENABLED, enforcing the change only after the
                                last boost client has exited.
          --auto-boost-permission=
                                Allow non-admin/root control over auto boost mode:
                                0/UNRESTRICTED, 1/RESTRICTED
nvidia-smi dmon -h

GPU statistics are displayed in scrolling format with one line
per sampling interval. Metrics to be monitored can be adjusted
based on the width of terminal window. Monitoring is limited to
a maximum of 4 devices. If no devices are specified, then up to
first 4 supported devices under natural enumeration (starting
with GPU index 0) are used for monitoring purpose.
It is supported on Tesla, GRID, Quadro and limited GeForce products
for Kepler or newer GPUs under x64 and ppc64 bare metal Linux.

Usage: nvidia-smi dmon [options]

Options include:
    [-i | --id]:          Comma separated Enumeration index, PCI bus ID or UUID
    [-d | --delay]:       Collection delay/interval in seconds [default=1sec]
    [-c | --count]:       Collect specified number of samples and exit
    [-s | --select]:      One or more metrics [default=puc]
                          Can be any of the following:
                              p - Power Usage and Temperature
                              u - Utilization
                              c - Proc and Mem Clocks
                              v - Power and Thermal Violations
                              m - FB and Bar1 Memory
                              e - ECC Errors and PCIe Replay errors
                              t - PCIe Rx and Tx Throughput
    [-o | --options]:     One or more from the following:
                              D - Include Date (YYYYMMDD) in scrolling output
                              T - Include Time (HH:MM:SS) in scrolling output
    [-f | --filename]:    Log to a specified file, rather than to stdout
nvidia-smi topo -h

topo -- Display topological information about the system.

Usage: nvidia-smi topo [options]

Options include:
    [-m | --matrix]: Display the GPUDirect communication matrix for the system.
    [-mp | --matrix_pci]: Display the GPUDirect communication matrix for the system (PCI Only).
    [-i | --id]: Enumeration index, PCI bus ID or UUID. Provide comma
                 separated values for more than one device
                 Must be used in conjuction with -n or -p.
    [-c | --cpu]: CPU number for which to display all GPUs with an affinity.
    [-n | --nearest_gpus]: Display the nearest GPUs for a given traversal path.
                 0 = a single PCIe switch on a dual GPU board
                 1 = a single PCIe switch
                 2 = multiple PCIe switches
                 3 = a PCIe host bridge
                 4 = an on-CPU interconnect link between PCIe host bridges
                 5 = an SMP interconnect link between NUMA nodes
                 Used in conjunction with -i which must be a single device ID.
    [-p | --gpu_path]: Display the most direct path traversal for a pair of GPUs.
                 Used in conjunction with -i which must be a pair of device IDs.
    [-p2p | --p2pstatus]: Displays the p2p status between the GPUs of a given p2p capability
                 w - p2p write capability
                 a - p2p atomics capability
                 p - p2p prop capability

With this tool, checking the status and health of NVIDIA GPUs is simple. If you’re looking to monitor the cards over time, then nvidia-smi might be more resource-intensive than you’d like. For that, have a look at the API available from NVIDIA’s GPU Management Library (NVML), which offers C, Perl and Python bindings.
There are also tools purpose-built for larger-scale health monitoring and validation. When managing a group or cluster of GPU-accelerated systems, administrators should consider NVIDIA Datacenter GPU Manager (DCGM) and/or Bright Cluster Manager.
Given the popularity of GPUs, most popular open-source tools also include support for monitoring GPUs. Examples include Ganglia, Telegraf, collectd, and Diamond.

### Github update - wrongly committed large file.

I added a large file (102 MB) to a git repository, committed, and pushed, and got an error because of GitHub's file size limit.
The error message shows the path of the offending file (coverage/sensitivity/simualted.bed).
So, the solution is actually quite simple (when you know it): you can use the filter-branch command as follows:
git filter-branch --tree-filter 'rm -rf path/to/your/file' HEAD
git push

### Download a file from Google Drive using the command line

Here are step-by-step instructions to download a file from Google Drive using the command-line API when the file is shared privately and needs authentication.
Get the file ID:
1. In Google Drive, right-click (or control-click) the file you want to download and click “Get shareable link”. The link looks like this: https://drive.google.com/open?id=XXXXX. Make note of the file ID “XXXXX”; you will need it below.
Get an OAuth token:
1. Go to OAuth 2.0 Playground
2. In the “Select the Scope” box, scroll down, expand “Drive API v3”, and select https://www.googleapis.com/auth/drive.readonly
3. Click “Authorize APIs” and then “Exchange authorization code for tokens”. Copy the “Access token”; you will need it below.
If using OS X or Linux, open the “Terminal” program and enter the following command.


curl -H "Authorization: Bearer YYYYY" "https://www.googleapis.com/drive/v3/files/XXXXX?alt=media" -o ZZZZZ


If using Windows, open the “PowerShell” program and enter the following command.


Invoke-RestMethod -Uri "https://www.googleapis.com/drive/v3/files/XXXXX?alt=media" -Method Get -Headers @{"Authorization"="Bearer YYYYY"} -OutFile ZZZZZ


In your command, replace “XXXXX” with the file ID from above, “YYYYY” with the access token from above, and “ZZZZZ” with the file name that will be saved (for example, “myFile.mp4” if you’re downloading an mp4 file).
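The same download can be scripted in Python using only the standard library. This is a sketch: `drive_request` and `download` are hypothetical helper names, and XXXXX/YYYYY are the same placeholders as above.

```python
import urllib.request

def drive_request(file_id, token):
    """Build the Drive v3 media-download request (no network I/O here)."""
    url = "https://www.googleapis.com/drive/v3/files/" + file_id + "?alt=media"
    return urllib.request.Request(url, headers={"Authorization": "Bearer " + token})

def download(file_id, token, out_path):
    """Stream the file body to out_path; requires a valid OAuth token."""
    req = drive_request(file_id, token)
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as fh:
        while True:
            chunk = resp.read(1 << 20)  # 1 MB at a time
            if not chunk:
                break
            fh.write(chunk)
```

Streaming in chunks keeps memory flat even for multi-gigabyte files, which is the usual reason to script this instead of clicking through the browser.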

Career advice:
1. You are not your job.
2. "I can" > IQ.
3. Do not speak poorly about others.
4. There is no career ladder; lift others and you become the ladder.
5. Know when it is time to leave.
6. You do not need a title to lead.
7. Always meet deadlines.
8. Deliver outside of your job description.
9. Stay teachable.
10. Share the credit.

### [LeetCode] Strobogrammatic Number

A strobogrammatic number is a number that looks the same when rotated 180 degrees (looked at upside down).
Write a function to determine if a number is strobogrammatic. The number is represented as a string.
Example 1:
  Input: "69"
Output: true

Example 2:
  Input: "88"
Output: true 
Example 3:
  Input: "962"
Output: false 
This problem defines a strobogrammatic number: one that looks the same after being rotated 180 degrees. For example, 609 rotated 180 degrees is still 609. Only a few digits survive rotation: 0, 1, and 8 map to themselves, while 6 and 9 map to each other; every other digit becomes invalid. The check is essentially a palindrome test with a twist: walk two pointers inward from both ends. If the two digits are equal, each must be 0, 1, or 8; if they differ, one must be 6 and the other 9. Any other combination returns false. See the code below:
Solution 1:

class Solution {
public:
    bool isStrobogrammatic(string num) {
        int l = 0, r = num.size() - 1;
        while (l <= r) {
            if (num[l] == num[r]) {
                // Equal digits must map to themselves: 0, 1, or 8.
                if (num[l] != '1' && num[l] != '0' && num[l] != '8') {
                    return false;
                }
            } else {
                // Unequal digits must be the 6/9 pair, in either order.
                if ((num[l] != '6' || num[r] != '9') &&
                    (num[l] != '9' || num[r] != '6')) {
                    return false;
                }
            }
            ++l; --r;
        }
        return true;
    }
};

Since only a few digits satisfy the condition, we can instead store every valid mapping in a hash table, then scan with two pointers and check that each pair of digits at mirrored positions appears in the table. If a pair has no mapping, return false; if the scan completes, return true. See the code below:
Solution 2:

class Solution {
public:
    bool isStrobogrammatic(string num) {
        // Every valid digit and what it becomes after a 180-degree rotation.
        unordered_map<char, char> m{{'0', '0'}, {'1', '1'}, {'8', '8'},
                                    {'6', '9'}, {'9', '6'}};
        for (int i = 0; i <= (int)num.size() / 2; ++i) {
            if (m[num[i]] != num[num.size() - i - 1]) return false;
        }
        return true;
    }
};
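For comparison, the same two-pointer idea is compact in Python. A sketch; the function name is just illustrative:

```python
def is_strobogrammatic(num):
    """True if num (a digit string) reads the same rotated 180 degrees."""
    pairs = {"0": "0", "1": "1", "8": "8", "6": "9", "9": "6"}
    l, r = 0, len(num) - 1
    while l <= r:
        # The left digit must be valid and must rotate into the right digit.
        if num[l] not in pairs or pairs[num[l]] != num[r]:
            return False
        l, r = l + 1, r - 1
    return True
```

The single dictionary lookup covers both cases from the C++ solutions: self-symmetric digits and the 6/9 pair.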

### Get that job at Google

I've been meaning to write up some tips on interviewing at Google for a good long time now. I keep putting it off, though, because it's going to make you mad. Probably. For some statistical definition of "you", it's very likely to upset you.

Why? Because... well, here, I wrote a little ditty about it:

Hey man, I don't know that stuff
Stevey's talking aboooooout
If my boss thinks it's important
I'm gonna get fiiiiiiiiiired
Oooh yeah baaaby baaaay-beeeeee....

I didn't realize this was such a typical reaction back when I first started writing about interviewing, way back at other companies. Boy-o-howdy did I find out in a hurry.

See, it goes like this:

Me: blah blah blah, I like asking question X in interviews, blah blah blah...

You: Question X? Oh man, I haven't heard about X since college! I've never needed it for my job! He asks that in interviews? But that means someone out there thinks it's important to know, and, and... I don't know it! If they detect my ignorance, not only will I be summarily fired for incompetence without so much as a thank-you, I will also be unemployable by people who ask question X! If people listen to Stevey, that will be everyone! I will become homeless and destitute! For not knowing something I've never needed before! This is horrible! I would attack X itself, except that I do not want to pick up a book and figure enough out about it to discredit it. Clearly I must yell a lot about how stupid Stevey is so that nobody will listen to him!

Me: So in conclusion, blah blah... huh? Did you say "fired"? "Destitute?" What are you talking about?

You: Aaaaaaauuuggh!!! *stab* *stab* *stab*

Me: That's it. I'm never talking about interviewing again.

It doesn't matter what X is, either. It's arbitrary. I could say: "I really enjoy asking the candidate (their name) in interviews", and people would still freak out, on account of insecurity about either interviewing in general or their knowledge of their own name, hopefully the former.

But THEN, time passes, and interview candidates come and go, and we always wind up saying: "Gosh, we sure wish that obviously smart person had prepared a little better for his or her interviews. Is there any way we can help future candidates out with some tips?"

And then nobody actually does anything, because we're all afraid of getting stabbed violently by People Who Don't Know X.

I considered giving out a set of tips in which I actually use variable names like X, rather than real subjects, but decided that in the resultant vacuum, everyone would get upset. Otherwise that approach seemed pretty good, as long as I published under a pseudonym.

In the end, people really need the tips, regardless of how many feelings get hurt along the way. So rather than skirt around the issues, I'm going to give you a few mandatory substitutions for X along with a fair amount of general interview-prep information.

Caveats and Disclaimers

This blog is not endorsed by Google. Google doesn't know I'm publishing these tips. It's just between you and me, OK? Don't tell them I prepped you. Just go kick ass on your interviews and we'll be square.

I'm only talking about general software engineering positions, and interviews for those positions.

These tips are actually generic; there's nothing specific to Google vs. any other software company. I could have been writing these tips about my first software job 20 years ago. That implies that these tips are also timeless, at least for the span of our careers.

These tips obviously won't get you a job on their own. My hope is that by following them you will perform your very best during the interviews.

Oho! Why Google, you ask? Well let's just have that dialog right up front, shall we?

You: Should I work at Google? Is it all they say it is, and more? Will I be serenely happy there? Should I apply immediately?

Me: Yes.

You: To which ques... wait, what do you mean by "Yes?" I didn't even say who I am!

Me: Dude, the answer is Yes. (You may be a woman, but I'm still calling you Dude.)

You: But... but... I am paralyzed by inertia! And I feel a certain comfort level at my current company, or at least I have become relatively inured to the discomfort. I know people here and nobody at Google! I would have to learn Google's build system and technology and stuff! I have no credibility, no reputation there – I would have to start over virtually from scratch! I waited too long, there's no upside! I'm afraaaaaaid!

Me: DUDE. The answer is Yes already, OK? It's an invariant. Everyone else who came to Google was in the exact same position as you are, modulo a handful of famous people with beards that put Gandalf's to shame, but they're a very tiny minority. Everyone who applied had the same reasons for not applying as you do. And everyone here says: "GOSH, I SURE AM HAPPY I CAME HERE!" So just apply already. But prep first.

You: But what if I get a mistrial? I might be smart and qualified, but for some random reason I may do poorly in the interviews and not get an offer! That would be a huge blow to my ego! I would rather pass up the opportunity altogether than have a chance of failure!

Me: Yeah, that's at least partly true. Heck, I kinda didn't make it in on my first attempt, but I begged like a street dog until they gave me a second round of interviews. I caught them in a weak moment. And the second time around, I prepared, and did much better.

The thing is, Google has a well-known false negative rate, which means we sometimes turn away qualified people, because that's considered better than sometimes hiring unqualified people. This is actually an industry-wide thing, but the dial gets turned differently at different companies. At Google the false-negative rate is pretty high. I don't know what it is, but I do know a lot of smart, qualified people who've not made it through our interviews. It's a bummer.

But the really important takeaway is this: if you don't get an offer, you may still be qualified to work here. So it needn't be a blow to your ego at all!

As far as anyone I know can tell, false negatives are completely random, and are unrelated to your skills or qualifications. They can happen from a variety of factors, including but not limited to:

1. you're having an off day
2. one or more of your interviewers is having an off day
3. there were communication issues invisible to you and/or one or more of the interviewers
4. you got unlucky and got an Interview Anti-Loop
Oh no, not the Interview Anti-Loop!

What is it, you ask? Well, back when I was at Amazon, we did (and they undoubtedly still do) a LOT of soul-searching about this exact problem. We eventually concluded that every single employee E at Amazon has at least one "Interview Anti-Loop": a set of other employees S who would not hire E. The root cause is important for you to understand when you're going into interviews, so I'll tell you a little about what I've found over the years.

First, you can't tell interviewers what's important. Not at any company. Not unless they're specifically asking you for advice. You have a very narrow window of perhaps one year after an engineer graduates from college to inculcate them in the art of interviewing, after which the window closes and they believe they are a "good interviewer" and they don't need to change their questions, their question styles, their interviewing style, or their feedback style, ever again.

It's a problem. But I've had my hand bitten enough times that I just don't try anymore.

Second problem: every "experienced" interviewer has a set of pet subjects and possibly specific questions that he or she feels is an accurate gauge of a candidate's abilities. The question sets for any two interviewers can be widely different and even entirely non-overlapping.

A classic example found everywhere is: Interviewer A always asks about C++ trivia, filesystems, network protocols and discrete math. Interviewer B always asks about Java trivia, design patterns, unit testing, web frameworks, and software project management. For any given candidate with both A and B on the interview loop, A and B are likely to give very different votes. A and B would probably not even hire each other, given a chance, but they both happened to go through interviewer C, who asked them both about data structures, unix utilities, and processes versus threads, and A and B both happened to squeak by.

That's almost always what happens when you get an offer from a tech company. You just happened to squeak by. Because of the inherently flawed nature of the interviewing process, it's highly likely that someone on the loop will be unimpressed with you, even if you are Alan Turing. Especially if you're Alan Turing, in fact, since it means you obviously don't know C++.

The bottom line is, if you go to an interview at any software company, you should plan for the contingency that you might get genuinely unlucky, and wind up with one or more people from your Interview Anti-Loop on your interview loop. If this happens, you will struggle, then be told that you were not a fit at this time, and then you will feel bad. Just as long as you don't feel meta-bad, everything is OK. You should feel good that you feel bad after this happens, because hey, it means you're human.

And then you should wait 6-12 months and re-apply. That's pretty much the best solution we (or anyone else I know of) could come up with for the false-negative problem. We wipe the slate clean and start over again. There are lots of people here who got in on their second or third attempt, and they're kicking butt.

You can too.

OK, I feel better about potentially not getting hired

Good! So let's get on to those tips, then.

If you've been following along very closely, you'll have realized that I'm interviewer D. Meaning that my personal set of pet questions and topics is just my own, and it's no better or worse than anyone else's. So I can't tell you what it is, no matter how much I'd like to, because I'll offend interviewers A through X who have slightly different working sets.

Instead, I want to prep you for some general topics that I believe are shared by the majority of tech interviewers at Google-like companies. Roughly speaking, this means the company builds a lot of their own software and does a lot of distributed computing. There are other tech-company footprints, the opposite end of the spectrum being companies that outsource everything to consultants and try to use as much third-party software as possible. My tips will be useful only to the extent that the company resembles Google.

So you might as well make it Google, eh?

First, let's talk about non-technical prep.

The Warm-Up

Nobody goes into a boxing match cold. Lesson: you should bring your boxing gloves to the interview. No, wait, sorry, I mean: warm up beforehand!

How do you warm up? Basically there is short-term and long-term warming up, and you should do both.

Long-term warming up means: study and practice for a week or two before the interview. You want your mind to be in the general "mode" of problem solving on whiteboards. If you can do it on a whiteboard, every other medium (laptop, shared network document, whatever) is a cakewalk. So plan for the whiteboard.

Short-term warming up means: get lots of rest the night before, and then do intense, fast-paced warm-ups the morning of the interview.

The two best long-term warm-ups I know of are:

1) Study a data-structures and algorithms book. Why? Because it is the most likely to help you beef up on problem identification. Many interviewers are happy when you understand the broad class of question they're asking without explanation. For instance, if they ask you about coloring U.S. states in different colors, you get major bonus points if you recognize it as a graph-coloring problem, even if you don't actually remember exactly how graph-coloring works.

And if you do remember how it works, then you can probably whip through the answer pretty quickly. So your best bet, interview-prep wise, is to practice the art of recognizing that certain problem classes are best solved with certain algorithms and data structures.

My absolute favorite for this kind of interview preparation is Steven Skiena's The Algorithm Design Manual. More than any other book it helped me understand just how astonishingly commonplace (and important) graph problems are – they should be part of every working programmer's toolkit. The book also covers basic data structures and sorting algorithms, which is a nice bonus. But the gold mine is the second half of the book, which is a sort of encyclopedia of 1-pagers on zillions of useful problems and various ways to solve them, without too much detail. Almost every 1-pager has a simple picture, making it easy to remember. This is a great way to learn how to identify hundreds of problem types.

Other interviewers I know recommend Introduction to Algorithms. It's a true classic and an invaluable resource, but it will probably take you more than 2 weeks to get through it. But if you want to come into your interviews prepped, then consider deferring your application until you've made your way through that book.

2) Have a friend interview you. The friend should ask you a random interview question, and you should go write it on the board. You should keep going until it is complete, no matter how tired or lazy you feel. Do this as much as you can possibly tolerate.

I didn't do these two types of preparation before my first Google interview, and I was absolutely shocked at how bad at whiteboard coding I had become since I had last interviewed seven years prior. It's hard! And I also had forgotten a bunch of algorithms and data structures that I used to know, or at least had heard of.

Going through these exercises for a week prepped me mightily for my second round of Google interviews, and I did way, way better. It made all the difference.

As for short-term preparation, all you can really do is make sure you are as alert and warmed up as possible. Don't go in cold. Solve a few problems and read through your study books. Drink some coffee: it actually helps you think faster, believe it or not. Make sure you spend at least an hour practicing immediately before you walk into the interview. Treat it like a sports game or a music recital, or heck, an exam: if you go in warmed up you'll give your best performance.

Mental Prep

So! You're a hotshot programmer with a long list of accomplishments. Time to forget about all that and focus on interview survival.

You should go in humble, open-minded, and focused.

If you come across as arrogant, then people will question whether they want to work with you. The best way to appear arrogant is to question the validity of the interviewer's question – it really ticks them off, as I pointed out earlier on. Remember how I said you can't tell an interviewer how to interview? Well, that's especially true if you're a candidate.

So don't ask: "gosh, are algorithms really all that important? do you ever need to do that kind of thing in real life? I've never had to do that kind of stuff." You'll just get rejected, so don't say that kind of thing. Treat every question as legitimate, even if you are frustrated that you don't know the answer.

Feel free to ask for help or hints if you're stuck. Some interviewers take points off for that, but occasionally it will get you past some hurdle and give you a good performance on what would have otherwise been a horrible stony half-hour silence.

Don't say "choo choo choo" when you're "thinking".

Don't try to change the subject and answer a different question. Don't try to divert the interviewer from asking you a question by telling war stories. Don't try to bluff your interviewer. You should focus on each problem they're giving you and make your best effort to answer it fully.

Some interviewers will not ask you to write code, but they will expect you to start writing code on the whiteboard at some point during your answer. They will give you hints but won't necessarily come right out and say: "I want you to write some code on the board now." If in doubt, you should ask them if they would like to see code.

Interviewers have vastly different expectations about code. I personally don't care about syntax (unless you write something that could obviously never work in any programming language, at which point I will dive in and verify that you are not, in fact, a circus clown and that it was an honest mistake). But some interviewers are really picky about syntax, and some will even silently mark you down for missing a semicolon or a curly brace, without telling you. I think of these interviewers as – well, it's a technical term that rhymes with "bass soles", but they think of themselves as brilliant technical evaluators, and there's no way to tell them otherwise.

So ask. Ask if they care about syntax, and if they do, try to get it right. Look over your code carefully from different angles and distances. Pretend it's someone else's code and you're tasked with finding bugs in it. You'd be amazed at what you can miss when you're standing 2 feet from a whiteboard with an interviewer staring at your shoulder blades.

It's OK (and highly encouraged) to ask a few clarifying questions, and occasionally verify with the interviewer that you're on the track they want you to be on. Some interviewers will mark you down if you just jump up and start coding, even if you get the code right. They'll say you didn't think carefully first, and you're one of those "let's not do any design" type cowboys. So even if you think you know the answer to the problem, ask some questions and talk about the approach you'll take a little before diving in.

On the flip side, don't take too long before actually solving the problem, or some interviewers will give you a delay-of-game penalty. Try to move (and write) quickly, since often interviewers want to get through more than one question during the interview, and if you solve the first one too slowly then they'll be out of time. They'll mark you down because they couldn't get a full picture of your skills. The benefit of the doubt is rarely given in interviewing.

One last non-technical tip: bring your own whiteboard dry-erase markers. They sell pencil-thin ones at office supply stores, whereas most companies (including Google) tend to stock the fat kind. The thin ones turn your whiteboard from a 480i standard-definition tube into a 58-inch 1080p HD plasma screen. You need all the help you can get, and free whiteboard space is a real blessing.

You should also practice whiteboard space-management skills, such as not starting on the right and coding down into the lower-right corner in Teeny Unreadable Font. Your interviewer will not be impressed. Amusingly, although it always irks me when people do this, I did it during my interviews, too. Just be aware of it!

Oh, and don't let the marker dry out while you're standing there waving it. I'm tellin' ya: you want minimal distractions during the interview, and that one is surprisingly common.

OK, that should be good for non-tech tips. On to X, for some value of X! Don't stab me!

Tech Prep Tips

The best tip is: go get a computer science degree. The more computer science you have, the better. You don't have to have a CS degree, but it helps. It doesn't have to be an advanced degree, but that helps too.

However, you're probably thinking of applying to Google a little sooner than 2 to 8 years from now, so here are some shorter-term tips for you.

Algorithm Complexity: you need to know Big-O. It's a must. If you struggle with basic big-O complexity analysis, then you are almost guaranteed not to get hired. It's, like, one chapter in the beginning of one theory of computation book, so just go read it. You can do it.

Sorting: know how to sort. Don't do bubble-sort. You should know the details of at least one n*log(n) sorting algorithm, preferably two (say, quicksort and merge sort). Merge sort can be highly useful in situations where quicksort is impractical, so take a look at it.
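As a quick refresher, here is a minimal merge sort sketch in Python; it is stable and O(n log n) even in the worst case, which is exactly where quicksort can degrade:

```python
def merge_sort(xs):
    """Top-down merge sort: O(n log n) time, O(n) extra space, stable."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Merge the two sorted halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stability)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

The merge step is also the trick behind sorting data that does not fit in memory, since it only ever looks at the fronts of two sorted streams.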

For God's sake, don't try sorting a linked list during the interview.

Hashtables: hashtables are arguably the single most important data structure known to mankind. You absolutely have to know how they work. Again, it's like one chapter in one data structures book, so just go read about them. You should be able to implement one using only arrays in your favorite language, in about the space of one interview.

Trees: you should know about trees. I'm tellin' ya: this is basic stuff, and it's embarrassing to bring it up, but some of you out there don't know basic tree construction, traversal and manipulation algorithms. You should be familiar with binary trees, n-ary trees, and trie-trees at the very very least. Trees are probably the best source of practice problems for your long-term warmup exercises.

You should be familiar with at least one flavor of balanced binary tree, whether it's a red/black tree, a splay tree or an AVL tree. You should actually know how it's implemented.

You should know about tree traversal algorithms: BFS and DFS, and know the difference between inorder, postorder and preorder.
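For the traversal orders, a minimal Python sketch on a tiny binary tree:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(n):    # left, root, right
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def preorder(n):   # root, left, right
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def postorder(n):  # left, right, root
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

def bfs(n):        # level order: DFS uses a stack/recursion, BFS a queue
    out, q = [], deque([n])
    while q:
        cur = q.popleft()
        if cur:
            out.append(cur.val)
            q.extend([cur.left, cur.right])
    return out

#     2
#    / \
#   1   3
root = Node(2, Node(1), Node(3))
```

On a binary search tree, inorder yields the keys in sorted order, which is the one-line fact worth having ready in an interview.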

You might not use trees much day-to-day, but if so, it's because you're avoiding tree problems. You won't need to do that anymore once you know how they work. Study up!

Graphs

Graphs are, like, really really important. More than you think. Even if you already think they're important, it's probably more than you think.

There are three basic ways to represent a graph in memory (objects and pointers, matrix, and adjacency list), and you should familiarize yourself with each representation and its pros and cons.

You should know the basic graph traversal algorithms: breadth-first search and depth-first search. You should know their computational complexity, their tradeoffs, and how to implement them in real code.
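A minimal sketch of both traversals over an adjacency list in Python; each visits every vertex and edge once, so both run in O(V + E):

```python
from collections import deque

# Adjacency-list representation: node -> list of neighbors.
graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

def bfs(g, start):
    """Breadth-first: explore by distance from start, using a queue."""
    seen, order, q = {start}, [], deque([start])
    while q:
        node = q.popleft()
        order.append(node)
        for nxt in g[node]:
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return order

def dfs(g, start, seen=None):
    """Depth-first: follow one path as far as possible, via recursion."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for nxt in g[start]:
        if nxt not in seen:
            order.extend(dfs(g, nxt, seen))
    return order
```

The only structural difference is the frontier: a FIFO queue gives BFS, a stack (here, the call stack) gives DFS.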

You should try to study up on fancier algorithms, such as Dijkstra and A*, if you get a chance. They're really great for just about anything, from game programming to distributed computing to you name it. You should know them.
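For reference, a minimal Dijkstra sketch in Python using a binary heap. This is the lazy-deletion variant: stale heap entries are simply skipped on pop rather than removed:

```python
import heapq

def dijkstra(g, source):
    """Shortest-path distances from source, non-negative edge weights.
    g maps node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale entry superseded by a shorter path
        for nxt, w in g.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return dist
```

A* is the same loop with the heap ordered by distance-so-far plus an admissible heuristic estimate of the remaining distance.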

Whenever someone gives you a problem, think graphs. They are the most fundamental and flexible way of representing any kind of a relationship, so it's about a 50-50 shot that any interesting design problem has a graph involved in it. Make absolutely sure you can't think of a way to solve it using graphs before moving on to other solution types. This tip is important!

Other data structures

You should study up on as many other data structures and algorithms as you can fit in that big noggin of yours. You should especially know about the most famous classes of NP-complete problems, such as traveling salesman and the knapsack problem, and be able to recognize them when an interviewer asks you them in disguise.

You should find out what NP-complete means.
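Knapsack is also a nice example of why "NP-complete" needs care: the 0/1 version has a well-known dynamic-programming solution that is polynomial in the numeric capacity, i.e. pseudo-polynomial, not polynomial in the input size. A sketch:

```python
def knapsack(items, capacity):
    """0/1 knapsack by DP. items is a list of (weight, value) pairs.
    Runs in O(n * capacity): pseudo-polynomial, since capacity is
    exponential in the number of bits used to write it down."""
    best = [0] * (capacity + 1)  # best[c] = max value using capacity c
    for weight, value in items:
        # Iterate capacity downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]
```

Recognizing that an interviewer's "pick a subset under a budget" question is knapsack in disguise is usually worth more than reciting the DP itself.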

Basically, hit that data structures book hard, and try to retain as much of it as you can, and you can't go wrong.

Math

Some interviewers ask basic discrete math questions. This is more prevalent at Google than at other places I've been, and I consider it a Good Thing, even though I'm not particularly good at discrete math. We're surrounded by counting problems, probability problems, and other Discrete Math 101 situations, and those innumerate among us blithely hack around them without knowing what we're doing.

Don't get mad if the interviewer asks math questions. Do your best. Your best will be a heck of a lot better if you spend some time before the interview refreshing your memory on (or teaching yourself) the essentials of combinatorics and probability. You should be familiar with n-choose-k problems and their ilk – the more the better.
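For instance, n-choose-k is one line in modern Python (`math.comb` needs Python 3.8+), and most counting questions reduce to a ratio of such counts:

```python
from math import comb, factorial

# n-choose-k: ways to pick k items from n, order ignored.
# comb(n, k) == n! / (k! * (n - k)!)
assert comb(5, 2) == factorial(5) // (factorial(2) * factorial(3))

# A typical interview-style count: the probability that a 5-card hand
# dealt from a 52-card deck contains all four aces. The four aces are
# forced, so only the one remaining card varies: choose 1 of 48.
p = comb(48, 1) / comb(52, 5)
```

Being able to set up the "favorable outcomes over total outcomes" ratio quickly is most of what these questions test.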

I know, I know, you're short on time. But this tip can really help make the difference between a "we're not sure" and a "let's hire her". And it's actually not all that bad – discrete math doesn't use much of the high-school math you studied and forgot. It starts back with elementary-school math and builds up from there, so you can probably pick up what you need for interviews in a couple of days of intense study.

Sadly, I don't have a good recommendation for a Discrete Math book, so if you do, please mention it in the comments. Thanks.

Operating Systems

This is just a plug, from me, for you to know about processes, threads and concurrency issues. A lot of interviewers ask about that stuff, and it's pretty fundamental, so you should know it. Know about locks and mutexes and semaphores and monitors and how they work. Know about deadlock and livelock and how to avoid them. Know what resources a process needs, and a thread needs, and how context switching works, and how it's initiated by the operating system and underlying hardware. Know a little about scheduling. The world is rapidly moving towards multi-core, and you'll be a dinosaur in a real hurry if you don't understand the fundamentals of "modern" (which is to say, "kinda broken") concurrency constructs.
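A tiny Python illustration of a mutex protecting a shared counter. Without the lock, the read-modify-write on `counter` can interleave across threads and increments get lost; with it, the final value is deterministic:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:  # mutual exclusion around the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 4 * 10000
```

The pattern, not the language, is the point: identify the critical section, hold the lock for exactly that span, and keep it short to avoid contention.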

The best, most practical book I've ever personally read on the subject is Doug Lea's Concurrent Programming in Java. It got me the most bang per page. There are obviously lots of other books on concurrency. I'd avoid the academic ones and focus on the practical stuff, since it's most likely to get asked in interviews.

Coding

You should know at least one programming language really well, and it should preferably be C++ or Java. C# is OK too, since it's pretty similar to Java. You will be expected to write some code in at least some of your interviews. You will be expected to know a fair amount of detail about your favorite programming language.

Other Stuff

Because of the rules I outlined above, it's still possible that you'll get Interviewer A, and none of the stuff you've studied from these tips will be directly useful (except being warmed up.) If so, just do your best. Worst case, you can always come back in 6-12 months, right? Might seem like a long time, but I assure you it will go by in a flash.

The stuff I've covered is actually mostly red-flags: stuff that really worries people if you don't know it. The discrete math is potentially optional, but somewhat risky if you don't know the first thing about it. Everything else I've mentioned you should know cold, and then you'll at least be prepped for the baseline interview level. It could be a lot harder than that, depending on the interviewer, or it could be easy.

It just depends on how lucky you are. Are you feeling lucky? Then give it a try!