
Containerized Jupyter Lab Authentication for Emacs-IPython-Notebook

In my earlier post, I described how to run Apache Spark in a container with Jupyter Notebook. While I like using Jupyter notebooks to document my work and share it with colleagues, I still prefer writing code in Emacs rather than in a web browser. The answer is EIN, the Emacs IPython Notebook. Installing EIN is straightforward; the tricky part is making it work with the containerized Jupyter.

Once you enter EIN, you need the command “ein:login” to connect to Jupyter. Internally it goes through a websocket, and if you just use your password, as you would from a browser, EIN throws an expired-websocket error and you will not be able to execute any code or create a new notebook. One fix for this problem is to use the Jupyter token as the password. The token can be found with the command below:

host$ docker exec spark-jupyter jupyter server list
Currently running servers:
http://1e2480237424:8888/?token=4899803e93c22739a8de56fa4deed22aef6568f93025c901 :: /home/jovyan

Jupyter can use both tokens and passwords for user authentication, depending on how it is configured. By default, the token changes every time the server starts, which makes it more secure but also harder to use. In a secured environment, we can fix the token by defining the JUPYTER_TOKEN environment variable (first line below) and passing it to the Docker container when starting Jupyter (second line):

host$ export JUPYTER_TOKEN="4899803e93c22739a8de56fa4deed22aef6568f93025c901"

host$ docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 -e JUPYTER_TOKEN=$JUPYTER_TOKEN -v $PWD:/home/jovyan --name spark-jupyter jupyter/all-spark-notebook

A better configuration is to run the “jupyter server password” command in the container (the equivalent “jupyter notebook password” was introduced in Notebook 5.0), which saves a hashed password in /home/jovyan/.jupyter/jupyter_server_config.json. Then you only need to type that same password when logging in. Though you can also disable both the password and the token, I would recommend against it.
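For example, with the container above, the password can be set interactively like this (assuming the container is named spark-jupyter as before):

host$ docker exec -it spark-jupyter jupyter server password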

To enable inline image display in EIN/Emacs, just put this line in your .emacs file:

(setq ein:output-area-inlined-images t)

How to Run Apache Spark from a Container

Apache Spark is a large-scale data analytics engine that can utilize distributed computing resources. It supports common data science languages such as Python and R; its Python support is provided through the PySpark package. Some advantages of using PySpark over vanilla Python (NumPy and pandas) are:

  • Speed. Spark can operate across multiple computers, so you don’t have to write your own parallel computing code. Some benchmarks claim up to a 100x speed gain with Spark.
  • Scale. You can develop code on a laptop and deploy it on a cluster to process data at scale.
  • Robustness. Execution won’t crash if some nodes are taken offline mid-run.

Other features, like Spark SQL, Spark ML, and support for streaming data sources, bring additional advantages.
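To give a flavor of the API, here is a minimal PySpark sketch (assuming a local pyspark installation; the data and column names are made up for illustration):

from pyspark.sql import SparkSession

# Start a local Spark session; on a cluster, the same code would just
# point at a different master.
spark = SparkSession.builder.appName('demo').getOrCreate()

# A tiny DataFrame with hypothetical columns.
df = spark.createDataFrame([('a', 1), ('b', 2), ('a', 3)], ['key', 'value'])
df.groupBy('key').sum('value').show()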

After a quick tryout of the Spark container image from Bitnami, I moved on to the image released by the Jupyter Docker Stacks project, which has good documentation. To run the container, expose the Jupyter notebook ports, and share the current host directory with the container, use this command:

docker run -d -p 80:8888 -p 4040:4040 -p 4041:4041 -v ${PWD}:/home/jovyan jupyter/all-spark-notebook

If you need to install additional packages into the provided container image, you can either install them inside the running container (“docker exec -it spark /bin/bash”) or modify the original docker-compose.yml file.
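For a quick, throwaway install into the running container, something like the line below works (a sketch; the package name is a placeholder, and the container name “spark” follows the command above). Changes made this way do not survive recreating the container, which is why modifying the compose file or building a derived image is the more durable option.

docker exec -it spark pip install some-package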


Extract Hidden URLs from PDF files

To search for URLs in a PDF file, one can use the built-in search function of a PDF reader and look for strings like “http.” However, when a URL is hidden behind link text, a simple string search in the reader will not find it. A more reliable way to get all the URLs is to use the “pdftotext” program to convert the PDF file to plain text, then use “grep” to catch the URLs. The “-raw” option helps keep the order of the text in place. The example one-liner below converts the PDF to text, identifies the generated txt file by looking for the most recently modified file, greps for http, then deletes the txt file.

pdftotext -raw ml.pdf && file=$(ls -tr | tail -1); grep http "$file"; rm "$file"
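A small Python equivalent of the same idea (a sketch; it assumes pdftotext is on the PATH and uses “-” to write the text to stdout, so no temporary file is left behind):

import re
import subprocess

# Convert the PDF to text on stdout, then pull out anything URL-like.
text = subprocess.run(['pdftotext', '-raw', 'ml.pdf', '-'],
                      capture_output=True, text=True).stdout
urls = re.findall(r'https?://\S+', text)
print('\n'.join(urls))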


Query SRA Sequence Runs with Python

Retrieving data from SRA is a common task. NCBI provides a nice tool collection, E-utilities (with its command-line counterpart Entrez Direct), to query and retrieve data from its databases. The example Python snippet below shows how to query the NCBI SRA database with sample identifiers and get a table of the linked NCBI BioProject, BioSample, Run, download location, and size.

import sys, os
import subprocess
import shlex
import pandas as pd
.....


def get_SRR_from_biosamples(csv: str, batch_size=10, debug=True):
    """Get SRA run IDs from BioSample IDs."""
    epost_cmd = 'epost -db biosample -format acc'
    elink_cmd = 'elink -target sra'
    efetch_cmd = 'efetch -db sra -format runinfo -mode xml'
    xtract_cmd = ('xtract -pattern Row -def "NA" -element '
                  'BioProject BioSample Run download_path size_MB')

    sample_ids = []
    results = []

    with open(csv, 'r') as fh:
        total_samples = fh.readlines()
        print(f'Total samples: {len(total_samples)}')
        for idx, line in enumerate(total_samples):
            sample_ids.append(line.rstrip())
            batch_num = int(idx / batch_size) + 1
            if debug and batch_num > 1:
                print('Debug mode. Stop execution after 1 batch.')
                break
            if (idx + 1) % batch_size == 0 or idx == len(total_samples) - 1:
                print(f'Processing batch {batch_num}: {sample_ids}')
                batch_results = []

                id_list = ','.join(sample_ids)
                # Build the epost command fresh for each batch so the -id
                # option does not accumulate across batches.
                epost = subprocess.Popen(
                    shlex.split(epost_cmd + f' -id "{id_list}"'),
                    stdout=subprocess.PIPE, encoding='utf8')
                elink = subprocess.Popen(shlex.split(elink_cmd),
                                         stdin=epost.stdout,
                                         stdout=subprocess.PIPE,
                                         encoding='utf8')
                efetch = subprocess.Popen(shlex.split(efetch_cmd),
                                          stdin=elink.stdout,
                                          stdout=subprocess.PIPE,
                                          encoding='utf8')
                xtract = subprocess.Popen(shlex.split(xtract_cmd),
                                          stdin=efetch.stdout,
                                          stdout=subprocess.PIPE,
                                          encoding='utf8')
                epost.wait()

                for row in xtract.stdout.readlines():
                    if not row.startswith('PRJ'):  # "502 Bad Gateway" when the server is busy
                        sys.stderr.write(f'Error processing {id_list}: {row}')
                    else:
                        if debug:
                            print(row.rstrip())
                        batch_results.append(row.split())
                print(f'\nTotal SRA Runs in batch {batch_num}: {len(batch_results)}.\n')
                results.extend(batch_results)
                sample_ids = []
    print(f'Total runs in collection: {len(results)} with {idx + 1} samples.')

    data = pd.DataFrame(results, columns=['BioProject', 'BioSample', 'Run',
                                          'Download', 'size_MB'])
    return data

Four Entrez Direct tools are used and need to be accessible from the environment: epost, elink, efetch, and xtract. Python’s subprocess module chains these steps together, much like Linux pipes. The samples are queried in batches to avoid sending queries to NCBI too frequently, which could get your future queries blocked. After receiving the sample run identifiers, one can use the prefetch tool from the SRA Toolkit to download the files. And, of course, prefetch can be wrapped and chained together as well.
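As a minimal sketch of that last step (assuming prefetch from the SRA Toolkit is on the PATH, and taking the DataFrame returned by the function above):

import subprocess

def prefetch_runs(data):
    # Download each SRA run listed in the 'Run' column.
    for run in data['Run']:
        subprocess.run(['prefetch', run], check=True)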


PyTorch Geometric and CUDA

PyTorch Geometric (PyG) is an add-on library for developing graph neural networks in Python. It supports CUDA, but you have to make sure it is installed correctly. Below is one error message I got after installing PyG:

from torch_geometric.data import Data
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
...

OSError: /anaconda3/lib/python3.7/site-packages/torch_sparse/_version_cuda.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSs

This error is clearly related to the CUDA version, so I checked it:

import torch
print(torch.version.cuda, torch.__version__)
10.2 1.9.0

Running “nvidia-smi,” however, reported CUDA version 11.2 (the highest version supported by the driver). So my system had somehow ended up with mixed versions of CUDA. To fix the mess and get PyG working, I did the following:

$ pip uninstall torch
$ pip install torch==1.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
$ pip install torch-scatter -f https://data.pyg.org/whl/torch-1.9.1+cu111.html
$ pip install torch-sparse -f https://data.pyg.org/whl/torch-1.9.1+cu111.html
$ pip install torch-geometric
$ apt-get install nvidia-modprobe

Note that there was no prebuilt wheel for CUDA 11.2 (cu112), so I used the closest version (cu111). Now PyG works! The “nvidia-modprobe” utility fixes “RuntimeError: CUDA unknown error – this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero,” which I got after having two Python sessions running, both trying to use CUDA.

Update from some other testing regarding these errors:

RuntimeError: Detected that PyTorch and torch_cluster were compiled with different CUDA versions. PyTorch has CUDA version 11.1 and torch_cluster has CUDA version 10.2. Please reinstall the torch_cluster that matches your PyTorch install.

RuntimeError: Detected that PyTorch and torch_spline_conv were compiled with different CUDA versions. PyTorch has CUDA version 11.1 and torch_spline_conv has CUDA version 10.2. Please reinstall the torch_spline_conv that matches your PyTorch install.

The following commands fixed them:

$ pip install --upgrade pip
$ CUDA=cu111
$ TORCH=1.9.1
$ pip install torch-cluster==1.5.9 -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
$ pip install torch-spline-conv -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
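To verify the environment afterwards, a quick check like this (a sketch) should print matching versions and import cleanly instead of raising the mismatch errors above:

import torch
print(torch.__version__, torch.version.cuda)  # e.g. 1.9.1+cu111 and 11.1
from torch_geometric.data import Data  # raises at import time on a CUDA mismatch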


Update R and Bioconductor

It’s not straightforward to update R and Bioconductor (a bioinformatics package collection for R), especially if you use the R package that comes with your Linux distribution. For example, the R package in the Ubuntu repository is still 3.5, while the latest version from the r-project site is already 4.0.3. While using an older version of R itself for statistics may not be a problem, many Bioconductor packages do have updates and bug fixes that require R 4. To make things worse, updating Bioconductor itself is a painful process. Below is what I did to update R, Bioconductor, and the associated packages. The main lesson learned: do not use the R package from the Ubuntu repository.

  1. Remove R installed from the Ubuntu repository;
  2. Install R from r-project by adding it as an apt repository;
  3. Update R packages (system-wide);
  4. Update R packages installed in the user directory;
  5. Update BiocManager.

The shell part:

sudo apt-get purge r-base* r-recommended r-cran-*
sudo apt autoremove
sudo apt update
sudo add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu focal-cran40/'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
sudo apt update
sudo apt install r-base r-base-core r-recommended r-base-dev

Then, in an R session:

update.packages(ask = FALSE, checkBuilt = TRUE)
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install(version = "3.12")

The R code section may need to run twice, once as root and once as a normal user, if you also have packages installed under your user directory (that is, not a system-wide installation by the system admin). The last command tells BiocManager (the Bioconductor package manager) to install Bioconductor version 3.12 and update all installed Bioconductor packages.
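For the system-wide pass, you can start R as root and run the same commands, e.g. (a sketch):

$ sudo R
> update.packages(ask = FALSE, checkBuilt = TRUE)
> BiocManager::install(version = "3.12")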


Git with Python

There are times when you want to manage source code programmatically, and this is where GitPython comes in. GitPython wraps the git commands so that you can execute most git functions from within Python (and without using shutil or subprocess directly). The example snippet below performs a “git clone”; if the destination already exists, it falls back to a “git checkout” of the “master” branch if that exists, or the “main” branch otherwise.

import sys
import git
from git import GitCommandError

# remote_repo is the repository URL; local_repo is the destination path.
try:
    git.Repo.clone_from(remote_repo, local_repo)
except GitCommandError as error:
    if error.status == 128:  # destination path already exists
        repo = git.Repo(local_repo)
        try:
            repo.git.checkout('master')
        except GitCommandError:
            repo.git.checkout('main')
    else:
        msg = ' '.join(error.command)
        msg += error.stderr
        sys.exit(f'Error in running git: {msg}.')

Signing Git Commit using GPG

I have been enjoying using Magit in Emacs for all my git-related work, and I ran into an error when tagging a release. The error message was:

git … tag --annotate --sign -m my_msg
error: gpg failed to sign the data
error: unable to sign the tag

This turned out to be caused by the fact that I had not set up GPG signing and a signing key. Below is how I fixed the problem; from now on, all my git commits will be signed.

$ gpg --gen-key

The command walks through a few prompts, e.g. asking for a name, e-mail address, and passphrase, and it is recommended that you type some random keys while gpg is working so that its random number generation has more entropy. At the end, you will see some text with a line like this:

gpg: key 404NOTMYREALKEYID marked as ultimately trusted

This string “404NOTMYREALKEYID” is the key id. The same key id also shows up in the output of the following command:

$ gpg --list-secret-keys --keyid-format LONG
.....
---------------------------
sec   rsa3072/404NOTMYREALKEYID ......

Finally, just register this key ID with git (commands below), and the problem is solved. So the problem was not in Magit but in my configuration: Magit passes the “--sign” option when it calls git, which is actually good practice.

$ git config --global commit.gpgsign true
$ git config --global user.signingkey 404NOTMYREALKEYID


Some Improvements of the Python Language

Recently I have been reading a lot of Python code and noticed several improvements to the language that are really useful; I wish I had known about them earlier, since some of them were added to the Python language specification as far back as Python 3.5.

One improvement is type hints (PEP 484, implemented in Python 3.5). They let developers annotate function parameters and return values with types, which helps a code reader or another developer understand what types are intended. There is no runtime checking of the actual arguments being passed. The following example from the PEP 484 page illustrates the usage.

def greeting(name: str) -> str:
    return 'Hello ' + name

Another nice improvement is the string formatting introduced in Python 3.6 as “formatted string literals,” or f-strings. They let you embed Python variable values (or even calls) directly in a string. Previously, one had to use the “.format()” method to format a string. The example below shows how convenient the new method is.

def greeting(name: str) -> str:
    return f'Hello {name}'
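Because f-strings accept arbitrary expressions, not just variable names, format specifiers and inline logic work too (illustrative values):

count = 3
print(f'{count} item{"s" if count != 1 else ""}')  # inline expression
print(f'pi is roughly {22 / 7:.3f}')  # format specifier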

Configuration for Developing R code with ESS

It has been close to 10 years since I last used R seriously. Back then I used ESS in Emacs for all my editing and interactive sessions. I really liked it, so I wanted to check out how ESS has evolved over the years. A quick glimpse at the ESS manual suggests it has gained a lot of improvements, and the Bioconductor project has evolved as well; for example, even the package installation mechanism in Bioconductor has changed.

Unfortunately, I have forgotten the tricks and configurations that I used to set up my ESS and Emacs environment. Below are a few things I had to go through again.

$ sudo apt install -y r-cran-devtools
$ sudo apt install libcurl4-gnutls-dev

The above two packages are required by quite a few other R packages. To make sure the user library path is loaded correctly, I had to set the following environment variable:

export R_LIBS="~/.local/R/lib"

The above setting works when starting R in a terminal. To make the local R library (in my home directory, so no root privileges are required) also work with the ESS R session, I had to create a ~/.Rprofile file with the following content:

.libPaths("~/.local/R/lib")

Now my Emacs speaks ESS well. I can install packages to my local library by default, use Emacs to edit R code in one frame while running an interactive R session in another, and use “C-c C-c” to execute a selected region from the R code frame.