Sunday, 29 April 2018

Internet and privacy: useful tools

Hi all,

In this post, I will talk about something different from science: privacy on the internet. The recent Facebook - Cambridge Analytica scandal has revealed what was known for a long time: if a service is 100% free, the likelihood that you are paying with your data is great. And data could be anything about you on the internet. Think about what you searched for, pictures of your kids on Instagram, where you have been with your smartphone (GPS), any email you exchanged (health matters, commercial transactions, bank accounts). I had a Yahoo email account for 20 years. It was great for a long time, but started to go down in recent years. More worryingly, I was part of the hacked batch (like 3 billion others) a few years ago. Some of my friends received spam emails from my address (and I got some too). So I started to switch some of my internet services.

I will now present some options for adding a bit more privacy to your internet use (web browser, web searches, emails and instant messaging).

Web browser:

Mozilla Firefox:

I am a longtime user of Firefox. The new core engine ("Quantum") is incredibly fast.

Some useful extensions:

- HTTPS Everywhere:
=> switches to the 'https' version of a website instead of 'http' (when both are available)

- NoScript Firefox extension:
=> blocks all scripts. Setting up the whitelist is a bit tedious, but then it is great.

- uBlock Origin:
=> also blocks ads and various other things.

Other tools (not tested):
- uMatrix:

Web searches:


DuckDuckGo is a free search engine that aggregates results from Wikipedia, Bing, Yahoo and others.
It doesn't track or keep your searches (unlike Google).
The business model is based on featured ads, which are not personalised.

You can easily set it as the default search engine in Firefox, Safari or Chrome.
(i.e. Firefox: Preferences => Search => Default Search Engine).

It also has some useful geeky features:

Theme settings:

Many features to alter the webpage theme.


Country location: 

You can specify the country of your searches, or stay country-agnostic.


Cheat sheet:

Try these searches:
firefox cheat sheet
python cheat sheet
pandas cheat sheet
emacs cheat sheet

Instant Answers:

For example, they provide highlights from Stack Overflow if your search is IT oriented.

pandas check float
seaborn time series


Bangs provide direct access to your web services, using the keyword "!bang" in front of your search.

!pubmed protein structure evolution codeml
!uniprot TLR4_human
!m camden town
!w hawking

A list for academia bangs is here:

Another alternative to DuckDuckGo would be Qwant:
It has the advantage of being hosted in Europe, if you don't want to have any data in the US (personally I don't mind).



Emails:

As mentioned, I used to have Yahoo! Mail. After the hack, I shifted to ProtonMail, and I am still using it (I even moved to the monthly subscription option).

Some advantages:

- End-to-end encrypted email, locally stored in Switzerland.
- Open source protocol:
- Encryption algorithm publicly audited.
- Free (with limited space).
- Subscription option with reasonable price, allows more emails and more space.
- Clean web interface and app for iOS and Android.
- Team communicates on Twitter in case of new features or problems.
- Bridge app to use with Outlook, Thunderbird, Apple Mail (never tried):


Some disadvantages:

- No possibility to search directly in the body of emails, due to the encryption.
- No IMAP (for the moment), but the bridge allows integration with email clients.
- And of course, end-to-end encryption only works if your contacts also use encrypted emails.

An alternative would be FastMail:

Instant messaging:


While WhatsApp is the most used app, it belongs to Facebook, and it has been revealed that they exchange data:

Signal is a good alternative:
- More or less the same functions as WhatsApp (messages, photos, phone calls, video calls).
- Open source:
- Encryption publicly audited.

Another option would be Telegram:
I have never tried it, and its encryption algorithm has never been publicly audited.


I presented some services I like. I am not saying we should give up all services provided by the GAFA (e.g. this blog is hosted on Blogger, from Google), but we should be aware of the trade-off of using free services.

I might update this page if I find other interesting tools in the future.

That's it. ^^

Sunday, 24 September 2017

Update about this blog / Contact

Some updates about this blog

Dear all,

I started this blog six years ago. Reading the first post ("I hope I will be able to keep a good pace during the year (I set an objective of two articles/month)."), I was indeed quite optimistic! But that seems to be the fate of many blogs. The direction I finally took was slower and more oriented towards technical topics. But it seems to fill a gap, considering the number of views and comments per month, especially for the tutorials on bioinformatics tools.

I would like to share some relevant updates.

Moving from academia to industry

Early this year, I made a big change in my professional life by moving from academia to industry (after 12 years in academia, since the start of my PhD). After discussing with many PhD students and postdocs, I will likely write a post detailing this transition. It could be of great help for academic researchers who are currently afraid for their professional future (which is natural, considering that only 3% of PhD students will get a faculty position).

Email address

As I recently changed employer, my previous institutional email address was closed. You can now contact me at the following address (which is not my professional address) for questions or dataset sharing:
Remember that all emails are treated privately, in particular if you send me unpublished datasets.

Donate button

A few months ago, I set up a donate button on the right. First, I would like to thank all who already made a contribution, this is very much appreciated. I will mainly use it to buy new scientific books.

To be clear, this donation is purely optional, and I will still try to answer your questions to the best of my knowledge (and at my own pace, which can be slow, sorry for procrastinating). This blog will remain free. As I appreciate when people answer my questions, I feel beholden to do the same in return. But of course, if I see more encouragement, this might motivate me to write more posts.


Advertising

Finally, I started to put some advertising on this blog. I will continue for a few months, but I have the feeling I will remove it after a while. The blog doesn't have enough visits for such a system.

Best regards,

Friday, 14 April 2017

Enable java applications in Firefox 52 and above

I was trying to use Phylowidget, which is a great tool to edit phylogenetic trees (i.e. regrafting branches onto other branches):

The problem is that it doesn't seem to work anymore!

Why? Simply because Firefox 52 now blocks every plugin except Flash:

Here is a solution to allow Firefox to use plugins other than Flash:

1) Go to the Apple menu -> System Preferences, then Java (at the very bottom).
Then go to the Security panel, and add the URL to the Exception Site List.
Here are my parameters:

2) In the Firefox address bar, type: about:config
=> select: I accept the risk!

3) At the very right of the list, right-click and select:

Then write: plugin.load_flash_only

And set it to "false".

4) Restart Firefox and finally go to Phylowidget again.
Now it should work. The steps should be similar for any other Java applet, such as Jalview.

Saturday, 18 June 2016

Detecting pervasive positive selection with site-models from CodeML / PAML

Disclaimer: please don't hesitate to contact me if anything is not working on your computer, if anything is unclear, or if you have comments to improve this tutorial.
My email address:

Theoretical principles:
When mutations are advantageous for fitness, they are propagated at a higher rate in the population. The selective pressure can be computed as the dN/dS ratio (ω). dS represents the synonymous rate (keeping the same amino acid) and dN the non-synonymous rate (changing the amino acid). In the absence of selective pressure (genetic drift only), the synonymous and non-synonymous rates are expected to be equal, so the dN/dS ratio is equal to 1. Under purifying selection, natural selection prevents the replacement of amino acids, so dN will be lower than dS, and dN/dS < 1. And under positive selection, when mutations are advantageous for fitness, the replacement rate of amino acids is favoured by selection, and dN/dS > 1.
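As a toy illustration of the ratio, here is a simplified counting sketch (in the spirit of Nei-Gojobori counting; note that CodeML actually estimates these rates by maximum likelihood, correcting for multiple substitutions and codon frequencies):

```python
def dn_ds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    """Crude dN/dS (omega): substitutions per site, non-synonymous over synonymous."""
    dN = nonsyn_subs / nonsyn_sites
    dS = syn_subs / syn_sites
    return dN / dS

# Purifying selection: amino-acid changes are removed, so dN/dS < 1.
print(dn_ds(5, 400, 20, 100))    # 0.0625
# Neutral evolution: both rates are equal, so dN/dS = 1.
print(dn_ds(40, 400, 10, 100))   # 1.0
```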

We can distinguish two types of positive selection: pervasive positive selection and episodic positive selection. The former implies that a site undergoes continuous changes (e.g. adapting to pathogens during an arms race), while the latter implies that a site changes once and is then kept in the clade (e.g. providing an advantage in a new environment). We can detect the latter using the branch-site model, for which I wrote the previous tutorial:

To detect pervasive positive selection, we will use the site models from CodeML/PAML. Those models allow the clustering of aligned columns (sites) into different groups, each group having a different dN/dS value. There are many different site models in CodeML, all assuming that the dN/dS ratio is the same across branches, but different between sites.

Here are the different models we will use in this tutorial:

M0: one unique dN/dS for all sites. This is the most basic model.

M1a: assumes two categories of sites: sites with dN/dS<1 (negative selection) and sites with dN/dS =1 (neutral evolution).

M2a: assumes three categories of sites: sites with dN/dS<1 (negative selection), sites with dN/dS=1 (neutral evolution) and sites with dN/dS>=1 (positive selection).

M3: assumes multiple categories of sites under different selective pressures, not necessarily positive selection.

M7: assumes 10 categories of sites following a beta distribution, all with different dN/dS <= 1.

M8: assumes 10 categories of sites following a beta distribution, all with different dN/dS <= 1, and an additional 11th category with dN/dS >= 1 (positive selection allowed).

M8a: assumes 10 categories of sites following a beta distribution, all with dN/dS <= 1, and an additional 11th category with dN/dS = 1 (no positive selection allowed).


For this practical, you will have to install some tools and download some of my scripts.

- Python2:
- BioPython:
- Jalview:
- Newick utilities:
- PyMOL:
- R:
- RevTrans:
- TrimAl:

You can install many of these packages with Homebrew or Linuxbrew: 

brew install python
brew install R
brew install homebrew/science/mafft
brew install homebrew/science/newick-utils
brew install homebrew/science/paml
brew install homebrew/science/pymol
brew install homebrew/science/trimal 

The python scripts you need to install:


You can download them from my GitHub account:

And install them in any accessible directory (e.g. the working directory, or a directory in your $PATH).


1) Dataset preparation

We will focus on the major histocompatibility complex (MHC) protein, which detects peptides from pathogens. As this gene is on the front line against invaders, it is under strong selective pressure to rapidly detect new antigenic peptides. Early work on positive selection focused on the MHC, so this is a very good example for this practical.

The UniProt entry is "HLA class II histocompatibility antigen, DQ beta 1 chain".
The Ensembl gene id is: ENSG00000179344.

# Download data from Ensembl:

Go to the Ensembl website, and search for ENSG00000179344.

1) Download orthologues sequence:
Comparative Genomics => Orthologues => Download orthologues 

Then choose: Fasta, Unaligned sequences – CDS.

2) Download subtree:

Comparative Genomics => Gene tree
Click on the blue node to select the following group: “Placental mammals ~100 MYA (Boreoeutheria)”
Gene Count    32

=> Export sub-tree   Tree or Alignment
=> Format Newick, options "Full (web)" and "Final (merged) tree".

You now have two starting files:

Sequences file: Human_HLA_DQB1_orthologues.fa
Tree file: HLA_DQB1_gene_tree.nh

We are now going to process these files in order to generate an alignment, visualise it, and remove spurious sequences and columns.

# First, let’s rename these files to have shorter names

cp Human_HLA_DQB1_orthologues.fa HLA_DQB1.cds.fasta
cp HLA_DQB1_gene_tree.nh HLA_DQB1.nh

# Remove the species tag in the gene name

./ HLA_DQB1.nh > HLA_DQB1.tree

# Extract gene names with Newick Utilities 

nw_labels -I HLA_DQB1.tree > HLA_DQB1_names.txt 

# Extract CDS sequences that are in the tree, according to the extracted names

./ HLA_DQB1_names.txt HLA_DQB1.cds.fasta > HLA_DQB1_subset.cds.fasta
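The extraction script itself is not reproduced here, but its job can be sketched in a few lines of pure Python (a hypothetical stand-in, not the original script): parse a FASTA file and keep only the records whose identifier appears in the names list.

```python
def read_fasta(text):
    """Parse FASTA text into (name, sequence) tuples; the name is the
    first word of the header line."""
    records, name, seq = [], None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if name is not None:
                records.append((name, "".join(seq)))
            name, seq = line[1:].split()[0], []
        elif line.strip():
            seq.append(line.strip())
    if name is not None:
        records.append((name, "".join(seq)))
    return records

def subset_fasta(text, names):
    """Keep only the records whose name is in `names`
    (e.g. the labels extracted from the tree with nw_labels)."""
    wanted = set(names)
    return [(n, s) for n, s in read_fasta(text) if n in wanted]
```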

# Translate CDS sequences to Amino Acid sequences using python script:

./ HLA_DQB1_subset.cds.fasta > HLA_DQB1_subset.aa.fasta
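The translation script is likewise not shown here; a minimal standard-library sketch of that step (assuming in-frame CDS sequences with unambiguous bases and the standard genetic code) could look like this:

```python
# Standard genetic code, built from the canonical TCAG codon ordering.
CODON_TABLE = dict(zip(
    (a + b + c for a in "TCAG" for b in "TCAG" for c in "TCAG"),
    "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"))

def translate(cds):
    """Translate an in-frame CDS into protein; '*' marks stop codons."""
    cds = cds.upper().replace("U", "T")
    usable = len(cds) - len(cds) % 3          # ignore a trailing partial codon
    return "".join(CODON_TABLE[cds[i:i + 3]] for i in range(0, usable, 3))

print(translate("ATGGCTTAA"))   # MA*
```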

# Make an alignment of these Amino Acid sequences

mafft-linsi HLA_DQB1_subset.aa.fasta > HLA_DQB1_subset.aa.mafft.fasta

# Align CDS sequences by mapping them onto the AA alignment

./ HLA_DQB1_subset.cds.fasta HLA_DQB1_subset.aa.mafft.fasta > HLA_DQB1_subset.cds.mafft.fasta

# With Jalview, load HLA_DQB1_subset.cds.mafft.fasta

# Visualise the alignment to check that nothing is wrong.
# Move the human sequence (ENSP00000364080) to the top (use the arrow keys).
# This tells CodeML to use ENSP00000364080 as the reference.
# Save the alignment as a FASTA file again (CTRL-S).

# Remove spurious sequences and columns with TrimAl

trimal -automated1 -in HLA_DQB1_subset.cds.mafft.fasta -resoverlap 0.75 -seqoverlap 85 -out HLA_DQB1_subset.cds.mafft.trimal.fasta -htmlout HLA_DQB1_subset.cds.mafft.trimal.html -colnumbering > HLA_DQB1_subset.cds.mafft.trimal.cols

# Convert trimmed sequences from FASTA to PHYLIP

./ HLA_DQB1_subset.cds.mafft.trimal.fasta HLA_DQB1_subset.cds.mafft.trimal.phy

# Extract gene names to id.list

grep ">" HLA_DQB1_subset.cds.mafft.trimal.fasta | cut -c 2- > id.list

# Using Newick Utilities, we load the id file to extract a pruned subtree from the starting tree (containing only the taxa present in the alignment).

id_list=`cat id.list`
echo "$id_list"
nw_prune -v HLA_DQB1.tree $id_list > HLA_DQB1_subset.tree

# Now we end up with two files:

Alignment: HLA_DQB1_subset.cds.mafft.trimal.phy
Tree: HLA_DQB1_subset.tree

2) Estimation of evolutionary values

This is the core of the tutorial. We will use codeml with two different control files (.ctl). Each computation can take 30-60 minutes, depending on your CPU.

# Compute several site models: M0, M1a, M2a, M3, M7 and M8. Save the following commands in the file HLA_DQB1_M0M1M2M3M7M8.ctl.

     seqfile = HLA_DQB1_subset.cds.mafft.trimal.phy  * sequence data file name
    treefile = HLA_DQB1_subset.tree                  * tree structure file name
     outfile = HLA_DQB1_M0M1M2M3M7M8.mlc             * main result file name

       noisy = 9   * 0,1,2,3,9: how much rubbish on the screen
     verbose = 1   * 1: detailed output, 0: concise output
     runmode = 0   * 0: user tree;  1: semi-automatic;  2: automatic
                   * 3: StepwiseAddition; (4,5):PerturbationNNI; -2: pairwise

     seqtype = 1   * 1:codons; 2:AAs; 3:codons-->AAs
   CodonFreq = 2   * 0:1/61 each, 1:F1X4, 2:F3X4, 3:codon table
       clock = 0   * 0: no clock, unrooted tree, 1: clock, rooted tree
      aaDist = 0   * 0:equal, +:geometric; -:linear, {1-5:G1974,Miyata,c,p,v}
       model = 0   * models for codons:
                   * 0:one, 1:b, 2:2 or more dN/dS ratios for branches
     NSsites = 0 1 2 3 7 8 * 0:one w; 1:NearlyNeutral; 2:PositiveSelection;       
                           * 3:discrete; 4:freqs; 5:gamma;6:2gamma;
                           * 7:beta;8:beta&w;9:beta&gamma;10:3normal

       icode = 0   * 0:standard genetic code; 1:mammalian mt; 2-10:see below
       Mgene = 0   * 0:rates, 1:separate; 2:pi, 3:kappa, 4:all
   fix_kappa = 0   * 1: kappa fixed, 0: kappa to be estimated
       kappa = 2   * initial or fixed kappa
   fix_omega = 0   * 1: omega or omega_1 fixed, 0: estimate
       omega = 1   * initial or fixed omega, for codons or codon-based AAs

       getSE = 0       * 0: don't want them, 1: want S.E.s of estimates
RateAncestor = 0       * (0,1,2): rates (alpha>0) or ancestral states (1 or 2)
  Small_Diff = .45e-6  * Default value.
   cleandata = 0       * remove sites with ambiguity data (1:yes, 0:no)?
 fix_blength = 0       * 0: ignore, -1: random, 1: initial, 2: fixed

# And execute it (this can take ~30-45 minutes):

codeml HLA_DQB1_M0M1M2M3M7M8.ctl

# Important! Copy rst file to another name:

cp rst HLA_DQB1_M0M1M2M3M7.rst.txt

# One last thing is to compute site model M8a, which is the same as M8 except that we fix dN/dS to 1 (only negative selection and neutral evolution allowed). Save the following commands in the file HLA_DQB1_M8a.ctl.

     seqfile = HLA_DQB1_subset.cds.mafft.trimal.phy   * sequence data file name
    treefile = HLA_DQB1_subset.tree                  * tree structure file name
     outfile = HLA_DQB1_M8a.mlc                      * main result file name

       noisy = 9   * 0,1,2,3,9: how much rubbish on the screen
     verbose = 1   * 1: detailed output, 0: concise output
     runmode = 0   * 0: user tree;  1: semi-automatic;  2: automatic
                   * 3: StepwiseAddition; (4,5):PerturbationNNI; -2: pairwise

     seqtype = 1   * 1:codons; 2:AAs; 3:codons-->AAs
   CodonFreq = 2   * 0:1/61 each, 1:F1X4, 2:F3X4, 3:codon table
       clock = 0   * 0: no clock, unrooted tree, 1: clock, rooted tree
      aaDist = 0   * 0:equal, +:geometric; -:linear, {1-5:G1974,Miyata,c,p,v}
       model = 0   * models for codons:
                   * 0:one, 1:b, 2:2 or more dN/dS ratios for branches
     NSsites = 8   * 0:one w; 1:NearlyNeutral; 2:PositiveSelection; 3:discrete;
                   * 4:freqs; 5:gamma;6:2gamma;
                   * 7:beta;8:beta&w;9:beta&gamma;10:3normal
       icode = 0   * 0:standard genetic code; 1:mammalian mt; 2-10:see below
       Mgene = 0   * 0:rates, 1:separate; 2:pi, 3:kappa, 4:all
   fix_kappa = 0   * 1: kappa fixed, 0: kappa to be estimated
       kappa = 2   * initial or fixed kappa
   fix_omega = 1   * 1: omega or omega_1 fixed, 0: estimate
       omega = 1   * initial or fixed omega, for codons or codon-based AAs

       getSE = 0       * 0: don't want them, 1: want S.E.s of estimates
RateAncestor = 0       * (0,1,2): rates (alpha>0) or ancestral states (1 or 2)
  Small_Diff = .45e-6  * Default value.
   cleandata = 0       * remove sites with ambiguity data (1:yes, 0:no)?
 fix_blength = 0       * 0: ignore, -1: random, 1: initial, 2: fixed

# And execute (this can take ~ 5-10 minutes):

codeml HLA_DQB1_M8a.ctl

3) Results

3.1) Identification of positive selection

Have a look at the mlc files. If you want to retrieve the log-likelihood values:
grep "lnL" *.mlc

HLA_DQB1_M0M1M2M3M7M8.mlc:lnL(ntime: 32  np: 34):  -4838.327776      +0.000000
HLA_DQB1_M0M1M2M3M7M8.mlc:lnL(ntime: 32  np: 35):  -4711.107980      +0.000000
HLA_DQB1_M0M1M2M3M7M8.mlc:lnL(ntime: 32  np: 37):  -4692.732347      +0.000000
HLA_DQB1_M0M1M2M3M7M8.mlc:lnL(ntime: 32  np: 38):  -4692.078126      +0.000000
HLA_DQB1_M0M1M2M3M7M8.mlc:lnL(ntime: 32  np: 35):  -4718.466841      +0.000000
HLA_DQB1_M0M1M2M3M7M8.mlc:lnL(ntime: 32  np: 37):  -4690.545425      +0.000000
HLA_DQB1_M8a.mlc:         lnL(ntime: 32  np: 36):  -4706.268471      +0.000000


The order of the lines is: M0, M1a, M2a, M3, M7, M8 and M8a. For each model, you directly get the number of parameters (np) and the log-likelihood value.


Using these models, we can construct four likelihood-ratio tests (LRTs), three of which will tell us whether there is significant positive selection or not.

1) M0-M3: this one is an exception: it only tells us whether there are different categories of sites under different selective pressures. This test is not used to detect positive selection, and it is nearly always significant.

2x(L1-L0) = 2x[(-4692.0781) – (-4838.3278)] = 292.4993
d.f. = 38-34 = 4
=> 4.49405E-62

2) M1a-M2a: this was the first site-model test developed to detect positive selection. We contrast a model with 2 classes of sites against a model with 3 classes. Degrees of freedom = 2.

2x(L1-L0) = 2x[(-4692.7323) – (-4711.1080)] = 36.7513
d.f. = 37-35 = 2
=> 1.04608E-08

The test is significant, so there is positive selection. However, this model is very conservative and can lack power under certain conditions.

3) M7-M8: this test also detects positive selection. We contrast a model with 10 classes of sites against a model with 11 classes. Degrees of freedom = 2.

2x(L1-L0) = 2x[(-4690.5454) – (-4718.4668)] = 55.842832
d.f. = 37-35 = 2
=> 7.47968E-13

The test is significant, so there is positive selection. However, this test can lack power under certain conditions, and the following LRT is preferred.

4) M8-M8a: this is the latest test. We contrast a model with 11 classes of sites where positive selection is not allowed (dN/dS = 1) against a model with 11 classes of sites where positive selection is allowed (dN/dS >= 1). Degrees of freedom = 1.

2x(L1-L0) = 2x[(-4690.5454) – (-4706.2685)] = 31.446092
d.f. = 37-36 = 1
=> 2.05E-08

This is the preferred test, combining power and robustness.
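These four tests can be reproduced with a short, standard-library-only Python sketch (log-likelihoods and np values are copied from the grep output above; the chi-square survival function is written out for the small degrees of freedom needed here):

```python
import math

# log-likelihood and number of parameters (np) for each model, from the mlc files
lnL = {"M0":  (-4838.327776, 34), "M1a": (-4711.107980, 35),
       "M2a": (-4692.732347, 37), "M3":  (-4692.078126, 38),
       "M7":  (-4718.466841, 35), "M8":  (-4690.545425, 37),
       "M8a": (-4706.268471, 36)}

def chi2_sf(x, df):
    """Chi-square survival function, written out for df = 1, 2 and 4."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    if df == 4:
        return math.exp(-x / 2.0) * (1.0 + x / 2.0)
    raise ValueError("df not handled in this sketch")

def lrt(null, alt):
    """Likelihood-ratio test of a null model against an alternative model."""
    l0, p0 = lnL[null]
    l1, p1 = lnL[alt]
    stat = 2.0 * (l1 - l0)
    df = p1 - p0
    return stat, df, chi2_sf(stat, df)

for null, alt in [("M0", "M3"), ("M1a", "M2a"), ("M7", "M8"), ("M8a", "M8")]:
    stat, df, p = lrt(null, alt)
    print(f"{null} vs {alt}: 2*(L1-L0) = {stat:.4f}, d.f. = {df}, p = {p:.3e}")
```

Note that for the M8-M8a comparison (d.f. = 1), the chi-square tail probability at 31.446 is about 2e-8, so the test is clearly significant.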

3.2) Identification of sites 

As these tests are significant, we can move to the next step: the precise identification of the sites under positive selection. If the previous step was not significant, you should not move to this stage.

In your mlc file, under the sections for models M2a and M8 (the only ones that allow positive selection), you will find a section called “Bayes Empirical Bayes (BEB) analysis (Yang, Wong & Nielsen 2005. Mol. Biol. Evol. 22:1107-1118)”. This section contains the list of sites that have a BEB score [Pr(w>1)] higher than 50%. Sites with BEB values higher than 95% are indicated by * and sites with values higher than 99% by **. Sometimes no site is detected, which probably indicates a problem in your analysis or dataset if your test is significant. Sometimes you will find a lot of sites, which can seem worrying, but it just means the average BEB (baseline) is slightly above 50%. The most interesting sites are those with BEB > 95% (* or **).

Bayes Empirical Bayes (BEB) analysis (Yang, Wong & Nielsen 2005. Mol. Biol. Evol. 22:1107-1118)
Positively selected sites (*: P>95%; **: P>99%)
(amino acids refer to 1st sequence: ENSP00000364080)

            Pr(w>1)     post mean +- SE for w
     4 R      0.961*        2.858 +- 0.698
    36 F      0.671         2.150 +- 1.019
    53 L      0.999**       2.956 +- 0.610
    84 D      1.000**       2.957 +- 0.608
    97 G      0.649         2.098 +- 1.018
   112 V      0.969*        2.888 +- 0.695
   114 F      0.999**       2.955 +- 0.611
   115 R      0.743         2.367 +- 1.093
   116 G      1.000**       2.957 +- 0.609
   121 R      0.835         2.593 +- 0.993
   247 R      0.617         2.050 +- 1.110

The first column corresponds to the position in the trimmed alignment (i.e. not the position in the reference sequence, which is ENSP00000364080, the human one). However, the amino acid corresponds exactly to the reference sequence.

One of the sites with the strongest BEB value is 84, with 1.000. Its own dN/dS value is 2.957, with a standard deviation of 0.608. As I said previously, the site numbers given by CodeML don't correspond to the human sequence. You can use the following script to extract the real position in the human sequence:

./ HLA_DQB1_subset.cds.mafft.fasta HLA_DQB1_subset.cds.mafft.trimal.cols "ENSP00000364080" 84
=> 84 89 D

=> Site 84 in CodeML (TrimAl numbering) corresponds to amino acid site 89 in the human sequence and codes for an aspartic acid.
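What this mapping script does can be sketched as follows (a hypothetical reimplementation: it assumes the trimal .cols output gives the 0-based columns of the full alignment that were kept):

```python
def map_site(ref_aligned, kept_cols, trimmed_site):
    """Map a 1-based site of the trimmed alignment back to a 1-based,
    ungapped position in the reference sequence, and report the residue."""
    col = kept_cols[trimmed_site - 1]        # 0-based column in the full alignment
    prefix = ref_aligned[:col + 1]
    pos = len(prefix) - prefix.count("-")    # gaps don't count in the reference
    return pos, ref_aligned[col]

# toy reference with one gap; TrimAl kept alignment columns 0, 2, 3 and 5
print(map_site("M-ADKG", [0, 2, 3, 5], 3))   # (3, 'D')
```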

# In total, we have six sites that are strong candidates (BEB>95%): 4, 53, 84, 112, 114 and 116. Let’s repeat the same for all these sites:

site_list="4 53 84 112 114 116" 
for site in $site_list; do ./ HLA_DQB1_subset.cds.mafft.fasta HLA_DQB1_subset.cds.mafft.trimal.cols "ENSP00000364080" $site; done

  4   8 R
 53  58 L
 84  89 D
112 117 V
114 119 F
116 121 G

# We can do the same for sites with 50%<BEB<95%:

site_list="36 97 115 121 247"

for site in $site_list; do ./ HLA_DQB1_subset.cds.mafft.fasta HLA_DQB1_subset.cds.mafft.trimal.cols "ENSP00000364080" $site; done

 36  41 F
 97 102 G
115 120 R
121 126 R
247 252 R

# We can also plot the dN/dS value per site. At the bottom of the rst file "HLA_DQB1_M0M1M2M3M7.rst.txt", extract the last part, which looks like this, and save it as “beb.txt”:

   1 K   0.02943 0.13133 0.20099 0.20092 0.16262 0.11574 0.07524 0.04540 0.02539 0.01292 0.00002 ( 3)  0.395 +-  0.198
   2 A   0.00278 0.02154 0.05527 0.09166 0.12189 0.14109 0.14752 0.14134 0.12323 0.09274 0.06093 ( 7)  0.737 +-  0.535
   3 L   0.00726 0.04637 0.09959 0.13896 0.15621 0.15361 0.13717 0.11292 0.08526 0.05671 0.00595 ( 5)  0.551 +-  0.270
   4 R   0.00000 0.00000 0.00000 0.00003 0.00019 0.00082 0.00248 0.00582 0.01104 0.01731 0.96230 (11)  2.860 +-  0.696
   5 I   0.44686 0.24476 0.14583 0.08007 0.04205 0.02141 0.01061 0.00510 0.00234 0.00099 0.00000 ( 1)  0.168 +-  0.149
   6 P   0.34022 0.22595 0.16343 0.10859 0.06876 0.04207 0.02495 0.01430 0.00780 0.00391 0.00001 ( 1)  0.221 +-  0.187

1st column = position in the trimmed alignment.
2nd column = amino acid from the reference sequence.
3rd to 13th columns = BEB score for each class (10 beta-distributed classes + 1 allowing positive selection).
14th = most likely class.
15th = estimated dN/dS value at this position.
16th = standard deviation for this dN/dS value.

You can see that for position 4, the most likely class is the 11th (BEB = 0.96), with dN/dS = 2.86 +- 0.70.

# CodeML output uses fixed-width columns. To parse it in R, we need to remove the space between the bracket and the number:

cat beb.txt | perl -pe "s/\( /\(/g" > tmp.txt; mv tmp.txt beb.txt

# Here are some commands to use in R to produce the following plot:

if (!require("ggplot2")) {
   install.packages("ggplot2", dependencies = TRUE)
   library(ggplot2)
}
df<-read.table("beb.txt", sep = "")

df$beb <- "No"
df$beb[df$V13 > 0.50] <- "Yes"

p <- ggplot(df, aes(V1, V15))
p + geom_point(aes(colour = factor(beb)))+
    geom_hline(yintercept = 1)+
    scale_color_manual(values = c("black", "red"))+
    labs(x = "Residue position")+
    labs(y = "Selective pressure [dN/dS]")+
    theme(axis.line = element_line(colour = "black"),
    panel.grid.major = element_blank(),
    panel.grid.minor = element_blank(),
    panel.background = element_blank())
ggsave("beb.png", height=3, width=4)

We can see that sites under positive selection represent only a small fraction, and most sites are under strong purifying selection.

4) Visualisation of sites in 3D structure

We found that the sites seem randomly distributed along the sequence. It would be interesting to see whether they form a pattern in the 3D structure. Download the following PDB file, 1UVQ:


Or download it from the webpage:

Then load it in PyMOL:

pymol 1UVQ.pdb &

# First, assign the different molecules to their associated polypeptide chains:

select HLA_DQA1, chain A
select HLA_DQB1, chain B
select peptide,  chain C

# We display them in cartoon/sticks, and colour the chains and the peptide differently:

hide everything
show cartoon, HLA_DQA1
show cartoon, HLA_DQB1
show sticks, peptide
colour white, HLA_DQA1
colour grey60, HLA_DQB1
colour green, peptide

# To spice things up, the residue numbering in many PDB files doesn’t correspond to the human sequence, so we have to renumber it:

renumber HLA_DQB1, 35

# Highlight sites with BEB>95%:

select sites_BEB95, HLA_DQB1 and resi 58+89+117+119+121
show spheres, sites_BEB95
colour yellow, sites_BEB95

# Highlight sites with 50%<BEB<95%:

select sites_BEB50, HLA_DQB1 and resi 41+102+120+126+252
show sticks, sites_BEB50
colour yellow, sites_BEB50

=> Most sites under positive selection (in yellow) are located exactly in the binding site, facing the target peptide (in green). This example with the MHC has been widely described in the literature [Hughes AL et al. 1988]. To my disappointment, there are not many examples combining selective pressure, 3D structure and experimental validation:

To get a nice finish, as in the figure above, rotate the structure the way you want and type:

bg_color white
select none
ray 2000
save 1UVQ_positive_sites.png

Et voila! I hope this tutorial was helpful.

Friday, 22 April 2016

Using package manager Homebrew (Mac OSX) for science / computational biology

I am using Mac OS X daily, for many reasons.

One reason is that you can use the classical tools from the Windows world (Microsoft Office, EndNote) that don't exist on Linux. This is important to me, as many of my external collaborators use Windows, so sharing documents can otherwise be problematic. I don't exclude shifting to Linux one day.

The second reason, and I think it was one of Apple's best strategic moves, was the transition to Unix in 2001. This greatly improved compatibility and gave access to tons of scientific software. Installing Unix tools from source is usually done with the classical "./configure; make; sudo make install".

Fortunately, package managers exist to avoid this task and to keep the system tidy, especially regarding library dependencies. At the very beginning, when I started my PhD in 2005, I was using Fink. I then switched to MacPorts. Last year, I encountered some problems with MacPorts and some versions of GCC. Everything was messy, and as I moved from 10.8 to El Capitan, I decided to switch to Homebrew. So I removed my MacPorts installation and installed Homebrew. And Homebrew is really great!

(Shaun Jackman told me on Twitter that he maintains the Linux fork of Homebrew, available here: )

Some reasons I like it:
  • Very easy to use.
  • No need for super-user rights, as it installs in the user directory.
  • It reuses already-installed libraries when possible.
  • There is one way for Unix-style software and one for .dmg packages (casks).

To install it, you need:
  • Mac OS X 10.9 or later.
  • Xcode (the most recent). [NB: Xcode is not needed anymore. This saves a lot of space!]
  • Command Line Tools (install with the command: xcode-select --install)

You can install Homebrew from there:

You just need this command, and homebrew will auto-install itself:

/usr/bin/ruby -e "$(curl -fsSL"

Then it is very easy to use in the Terminal. For example, if you are interested in installing "Gimp":

brew search gimp => searches for packages containing the word "gimp".

brew info gimp => describes the gimp package and which dependencies are needed.

brew install gimp => installs gimp.

brew list => shows everything installed on your computer.

There are other sections in Homebrew, for example a category specific to games, or one specific to science. To include them directly in your Homebrew, you can tap them:

brew tap homebrew/games
brew tap homebrew/science

Now, you can install CodeML / PAML:
brew install paml

For example, here is what I have installed from the science category:

brew leaves | grep "science"


You will recognise many useful tools. Nice, isn't it?

However, there are still some tools that cannot be installed with Homebrew:

brew search biopython => No formula found for "biopython".
brew search jalview => No formula found for "jalview".
brew search pagan => No formula found for "pagan".
brew search njplot => No formula found for "njplot".
brew search netphorest => No formula found for "netphorest".
brew search vmd => not the one I'm looking for. :/

(I will update this list once I have installed them.)

There is also the Caskroom, which contains other packages, such as .dmg packages.

Likewise, you can tap it into Homebrew:
brew tap caskroom/cask

For example, to install FigTree:

brew cask install --force figtree
=> it will install FigTree and create a shortcut in /Users/yourname/Applications

If you want to install it in /Applications:
brew cask install --appdir="/Applications" --force figtree

=> it will force the creation of the shortcut in /Applications. But I prefer to install in my own Applications folder; it is cleaner.

Here is my short list:

brew cask list | sort


It is easy to maintain and update all your packages (except the ones from CASK, I don't know why):

brew doctor => check that everything is OK. It can display lots of warnings, especially if you previously installed programs by hand.

brew update => update your package list definitions.

brew upgrade => upgrade all your outdated packages.

brew upgrade $FORMULA => upgrade only the specified formula (e.g. mrbayes).

brew cleanup
brew cask cleanup

You can run the maintenance in one line:
brew update; brew upgrade; brew cleanup; brew cask cleanup
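To save some typing, you can wrap this maintenance routine in a small function in your shell profile. Here is a minimal sketch (the name "brewup" is my own suggestion, not a Homebrew command):

```shell
# Hypothetical helper for ~/.bash_profile: run the whole
# Homebrew maintenance routine with a single command.
brewup() {
    brew update        # refresh the package list definitions
    brew upgrade       # upgrade all outdated formulae
    brew cleanup       # remove old downloads and versions
    brew cask cleanup  # same, for Cask packages
}
```

After reloading your profile (source ~/.bash_profile), typing "brewup" runs the four steps in order.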

For the packages from CASK, here is how I update them (with the --force option):

brew cask install --force --srgb --with-cocoa emacs

brew cask install --force java
brew cask install --force arduino
brew cask install --force avogadro
brew cask install --force bittorrent
brew cask install --force cathode
brew cask install --force figtree
brew cask install --force filezilla
brew cask install --force gimp
brew cask install --force grandperspective
brew cask install --force google-earth
brew cask install --force handbrake
brew cask install --force kompozer
brew cask install --force kid3
brew cask install --force mplayer-osx-extended
brew cask install --force rstudio
brew cask install --force scribus
brew cask install --force silverlight
brew cask install --force slack
brew cask install --force teamviewer
brew cask install --force thunderbird
brew cask install --force unetbootin
brew cask install --force unrarx
brew cask install --force vlc
brew cask install --force vox

brew cask install --force --appdir="/Applications" firefox
brew cask install --force --appdir="/Applications" libreoffice
brew cask install --force --appdir="/Applications" skype

You can also do it in one line like this:
brew cask list | xargs brew cask install --force

That's it!

I strongly recommend using Homebrew, especially if you are starting from a fresh system. The Homebrew community is very active and new packages are added every week.



Wednesday, 20 April 2016

Tutorial: estimating the stability effect of a mutation with FoldX - release 4

Note: this is the exact same tutorial as the one published on 25 March 2015, except it is now based on FoldX4. The reason is that the FoldX licence has to be renewed every year (it expires on the 31st of December). This means that if you ran jobs over the Christmas holidays, they were killed on New Year's Eve. The problem is that they shifted from release 3 to release 4, which was accompanied by a complete change in the interface. This is good news, though, as it is much simpler now.

They also made some changes in the energy computation by adding some parameters, but this is not documented yet. So if you started your project with FoldX3 and you need to use FoldX again, it might be better to re-run FoldX4 on your whole dataset, for coherence reasons.

Finally, as usual, if you have any questions or comments, you are welcome!



Here is a brief tutorial on how to use FoldX to estimate the stability effect of a mutation in a 3D structure. The stability (ΔG) of a protein is defined by its free energy, which is expressed in kcal/mol. The lower it is, the more stable the protein is. ΔΔG is the difference in free energy (in kcal/mol) between a wild-type and a mutant. A mutation that brings energy (ΔΔG > 0 kcal/mol) will destabilise the structure, while a mutation that removes energy (ΔΔG < 0 kcal/mol) will stabilise the structure. A common threshold is to say that a mutation has a significant effect if |ΔΔG| > 1 kcal/mol, which roughly corresponds to the energy of a single hydrogen bond.

A good way to compute the free energy is to use molecular dynamics. The main problem: it can be very time-consuming.

FoldX uses an empirical method to estimate the stability effect of a mutation. The executable is available here:

You need to register, but it is free for Academics.

NB: I strongly encourage you to read the manual (before, or in parallel with, this tutorial).
[NB2 (20/04/2016): I haven't found a proper manual for FoldX4, only HTML pages per command.]


FoldX has been used in many studies, e.g.:
Tokuriki N, Stricher F, Serrano L, Tawfik DS. How protein stability and new functions trade off. PLoS Comput Biol. 2008 Feb 29;4(2):e1000002

Dasmeh P, Serohijos AW, Kepp KP, Shakhnovich EI. Positively selected sites in cetacean myoglobins contribute to protein stability. PLoS Comput Biol. 2013;9(3):e1002929.
And I personally used it in three of my studies:
Studer RA, Christin PA, Williams MA, Orengo CA. Stability-activity tradeoffs constrain the adaptive evolution of RubisCO. Proc Natl Acad Sci U S A. 2014 Feb 11;111(6):2223-8.
Studer RA, Opperdoes FR, Nicolaes GA, Mulder AB, Mulder R. Understanding the functional difference between growth arrest-specific protein 6 and protein S: an evolutionary approach. Open Biol. 2014 Oct;4(10). pii: 140121.

Rallapalli PM, Orengo CA, Studer RA, Perkins SJ. Positive selection during the evolution of the blood coagulation factors in the context of their disease-causing mutations. Mol Biol Evol. 2014 Nov;31(11):3040-56.

Tutorial by example:

The structure is a bacterial cytochrome P450 (PDB:4TVF). You can download its PDB file (4TVF.pdb) from here:

Or directly with wget:

wget https://files.rcsb.org/download/4TVF.pdb
We would like to test the stability effect of a mutation at position 280, from a leucine (L) to an aspartic acid (D). Here is the original structure, with Leu280 in green and the residues within 6 Å in yellow:

FoldX works in two steps:

1) Repair the structure.

There are frequent problems in PDB structures, like steric clashes. FoldX will try to fix them and lower the global energy (ΔG). The "RepairPDB" command is better than the "Optimize" command. Here is how to launch FoldX:
foldx --command=RepairPDB --pdb=4TVF.pdb --ionStrength=0.05 --pH=7 --water=CRYSTAL --vdwDesign=2 --outPDB=true --pdbHydrogens=false

We indicate which PDB file to use, that we want to repair it (RepairPDB), that it should use the water and metal bonds from the PDB file (--water=CRYSTAL), and that we want a PDB file as output (--outPDB=true). All other parameters are left at their default values.

This process is quite long (around 10 minutes). Here is the result (the original structure is now in white, while the repaired structure is in yellow/green):

We can see that some side chains have slightly moved (in particular Phe16).

The starting free energy ΔG was +73.22 kcal/mol, and it was lowered to -46.97 kcal/mol, which is now stable (remember that a "+" sign means unstable while a "-" sign means stable).

Once it's finished, it will produce a file named "4TVF_Repair.pdb", which you will use in the next step.

2) Perform the mutation

The mutation itself is performed by the BuildModel command. There are other methods, but BuildModel is apparently the most robust (I say apparently, because there is no proper benchmark against the other methods, PositionScan and PSSM). You also need to specify the mutation in a separate file, "individual_list.txt". Here is the file (yes, just one line):

LA280D;

It contains the starting amino acid (L), the chain (A), the position (280) and the amino acid you want at the end (D), terminated by a semicolon. One line corresponds to one mutant. This means you can mutate several residues at the same time within one line (one mutant), and also produce different mutants with different lines.
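If you have many mutations to test, it can be handy to generate this file with a few lines of script. Here is a small sketch (the helper name build_mutant is my own, not part of FoldX):

```python
def build_mutant(wt, chain, position, mut):
    """Build one FoldX mutation code: wild-type amino acid,
    chain, position, mutant amino acid, ending with ';'."""
    return "%s%s%d%s;" % (wt, chain, position, mut)

# Write one mutant per line into individual_list.txt:
with open("individual_list.txt", "w") as handle:
    handle.write(build_mutant("L", "A", 280, "D") + "\n")
```

For several mutants, write one build_mutant() line per mutant.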

In the following command line, you will see that it is 4TVF_Repair.pdb, and not 4TVF.pdb, that is mutated. You will also notice "--numberOfRuns=3". This is because some residues can have many rotamers and could have convergence problems. You may want to increase this value to 5 or 10 if you are mutating long residues (e.g. arginine) that have many rotamers.

You can run it by:
foldx --command=BuildModel --pdb=4TVF_Repair.pdb --mutant-file=individual_list.txt --ionStrength=0.05 --pH=7 --water=CRYSTAL --vdwDesign=2 --outPDB=true --pdbHydrogens=false --numberOfRuns=3
It is much faster this time (a few seconds) and will produce many files.

FoldX will first mutate the target residue (L) to itself (L) and move it, as well as all neighbouring side chains, multiple times. We can see that Leu280 (green) was rotated:

=> This will give the free energy of the wild-type (let's call it ΔGwt).

Then, it will mutate the target residue (L) to the desired mutant (D) and move it, as well as all neighbouring side chains, multiple times. We can see that Leu280 is mutated to Asp280 (see the two oxygen atoms in red):

=> This will give the free energy of the mutant (let's call it ΔGmut).

The difference in free energy (ΔΔG) is given by ΔGmut-ΔGwt.

In the file "Raw_4TVF_Repair.fxout", you can retrieve the energy of the three runs for both WT and Mutant.

  • ΔGmut = 4TVF_Repair_1.pdb = -42.7114 kcal/mol
  • ΔGwt = WT_4TVF_Repair_1_0.pdb = -47.6248 kcal/mol
  • => ΔΔG = ΔGmut-ΔGwt = (-42.7114)-(-47.6248) = +4.9134 kcal/mol
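The subtraction above is trivial by hand, but it is worth scripting when you process many mutants. A minimal sketch (the function name delta_delta_g is my own):

```python
def delta_delta_g(dg_mut, dg_wt):
    """ddG = dG(mutant) - dG(wild-type), in kcal/mol.
    Positive = destabilising, negative = stabilising."""
    return dg_mut - dg_wt

# Values from the first run above:
ddg = delta_delta_g(-42.7114, -47.6248)
print(round(ddg, 4))  # 4.9134
```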

One file contains the average difference over all runs: "Average_4TVF_Repair.fxout".

(You will notice that the average difference in free energy ΔΔG is +4.85 kcal/mol [± 0.06 kcal/mol].)

=> It means the mutation L280D is highly destabilising (positive value, and much above 1.0 kcal/mol).

PS: A way to define finer thresholds is to use multiples of the standard deviation (SD):

The reported accuracy of FoldX is 0.46 kcal/mol (i.e., the SD of the difference
between ΔΔGs calculated by FoldX and the experimental values). We can bin the ΔΔG values into seven categories:
  1. highly stabilising (ΔΔG < −1.84 kcal/mol); 
  2. stabilising (−1.84 kcal/mol ≤ ΔΔG < −0.92 kcal/mol); 
  3. slightly stabilising (−0.92 kcal/mol ≤ ΔΔG < −0.46 kcal/mol); 
  4. neutral (−0.46 kcal/mol ≤ ΔΔG ≤ +0.46 kcal/mol);
  5. slightly destabilising (+0.46 kcal/mol < ΔΔG ≤ +0.92 kcal/mol);
  6. destabilising (+0.92 kcal/mol < ΔΔG ≤ +1.84 kcal/mol);
  7. highly destabilising (ΔΔG > +1.84 kcal/mol).
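These seven bins are easy to encode in a small function. Here is a sketch (the name classify_ddg is my own; the thresholds are the multiples of the 0.46 kcal/mol accuracy listed above):

```python
def classify_ddg(ddg):
    """Bin a FoldX ddG value (kcal/mol) into the seven
    categories defined above."""
    if ddg < -1.84:
        return "highly stabilising"
    if ddg < -0.92:
        return "stabilising"
    if ddg < -0.46:
        return "slightly stabilising"
    if ddg <= 0.46:
        return "neutral"
    if ddg <= 0.92:
        return "slightly destabilising"
    if ddg <= 1.84:
        return "destabilising"
    return "highly destabilising"

print(classify_ddg(4.85))  # highly destabilising (the L280D mutation above)
```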

Friday, 27 November 2015

Cleaning Python scripts with Pylint and GNU/Expand

Python is a wonderful programming language, but it can be quite tolerant of sloppy syntax. For example, indentation is really important, but you can use either tabulations or spaces. You can even mix them in Python 2 (this is forbidden in Python 3).

These days, I am trying to stick to the rules (I am getting old). For that, there is the PEP 8 style guide:

For example, the official format for indentation is 4 spaces per indentation level.

I found Pylint, which checks your code for such errors:

It gives a list of all the errors, line by line, and a global score.

Let's give it a try. Here is a script that prints Fibonacci numbers, which you can download from here:

Save it as fibo.py and launch it:

python fibo.py 9
0 1
1 1
1 2
2 3
3 5
5 8
8 13
13 21
(9, 21)

It works! Great!
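For reference, here is a PEP 8-clean, Python 3 sketch that produces the same output as above (this is my own reconstruction, not the original script, which mixes tabs and spaces on purpose):

```python
import sys


def fibo(n):
    """Print successive Fibonacci pairs and return (n, last value)."""
    i, j = 1, 0
    for _ in range(1, n):
        i, j = j, i + j
        print(i, j)
    return (n, j)


if __name__ == "__main__" and len(sys.argv) > 1:
    print(fibo(int(sys.argv[1])))
```

Running `python fibo.py 9` on this version prints the eight pairs shown above, ending with (9, 21).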

But let's have a look with Pylint:

Global evaluation
Your code has been rated at 0.83/10

Ouch, this hurts!

Many problems apparently:

************* Module fibo
C:  9, 0: Exactly one space required after comma
    i,j = 1,0
     ^ (bad-whitespace)
C:  9, 0: Exactly one space required after comma
    i,j = 1,0
           ^ (bad-whitespace)
C: 11, 0: Exactly one space required after comma
    for k in range(1,n + 1):
                    ^ (bad-whitespace)
W: 12, 0: Found indentation with tabs instead of spaces (mixed-indentation)
C: 12, 0: Exactly one space required after comma
        i,j = j, i + j
      ^ (bad-whitespace)
C: 13, 0: Trailing whitespace (trailing-whitespace)
W: 13, 0: Found indentation with tabs instead of spaces (mixed-indentation)
C: 13, 0: Exactly one space required after comma
        print i,j
            ^ (bad-whitespace)
C: 15, 0: Trailing whitespace (trailing-whitespace)
W: 11, 8: Unused variable 'k' (unused-variable)
W:  3, 0: Unused import os (unused-import)

First problem: there is a mix of tabs and spaces. It is easy to correct a small file manually, but it can be tough with a very big file. GNU/UNIX provides some tools for that:

1) command cat
With Linux: cat -A
With MacOSX: cat -e -t
With Windows: guru meditation error, sorry.

=> spaces will be displayed as spaces, but tabs will now be displayed as "^I". It is easier to see any problem.

2) command expand
expand -t 4 fibo.py > tmp.txt  # Change all tabs to 4 spaces.
mv tmp.txt fibo.py  # Restore the filename.
chmod +x fibo.py  # Make it executable again.

And now try again:
cat -e -t fibo.py

Global evaluation
Your code has been rated at 2.50/10 (previous run: 0.83/10, +1.67)

Good, there is some progress.

C:  9, 0: Exactly one space required after comma
    i,j = 1,0
     ^ (bad-whitespace)
C:  9, 0: Exactly one space required after comma
    i,j = 1,0
           ^ (bad-whitespace)
C: 11, 0: Exactly one space required after comma
    for k in range(1,n + 1):
                    ^ (bad-whitespace)
C: 12, 0: Exactly one space required after comma
        i,j = j, i + j
         ^ (bad-whitespace)
C: 13, 0: Trailing whitespace (trailing-whitespace)
C: 13, 0: Exactly one space required after comma
        print i,j
               ^ (bad-whitespace)
C: 15, 0: Trailing whitespace (trailing-whitespace)
W: 11, 8: Unused variable 'k' (unused-variable)
W:  3, 0: Unused import os (unused-import)

The rest of the errors are more about styling:
- Add a space after each comma.
- Remove the unused "import os".
- Remove trailing whitespace (visible with cat -A / cat -e -t).

And now:
Your code has been rated at 9.09/10.
Much better!

The last problem is the unused variable k. We can leave it like this, or handle it in a more elegant way (e.g. with a while loop).

So, my recommendations:

- Write your code properly from the beginning.
- Use Pylint to identify errors.
- Correct the errors.
- Have a look to check that your code has not changed (e.g. shifted indentation).
- Run your code again to check that it still works.

PS: It looks like Emacs 25.0 handles the tab key properly, i.e. it inserts 4 spaces in the file instead of a tab.