.. sectnum::
   :start: 5

.. _Collection_Data_types:

*********************
Collection Data types
*********************
Exercise
--------
| Draw the representation in memory of the following expressions.
| What is the data type of each object?

::

   x = [1, 2, 3, 4]
   y = x[1]
   y = 3.14
   x[1] = 'foo'
.. figure:: _static/figs/list_1.png
   :width: 400px
   :alt: list
   :figclass: align-center
::

   x = [1, 2, 3, 4]
   x += [5, 6]
.. figure:: _static/figs/augmented_assignment_list.png
   :width: 400px
   :alt: augmented assignment on a list
   :figclass: align-center
::

   >>> x = [1, 2, 3, 4]
   >>> id(x)
   139950507563632
   >>> x += [5, 6]
   >>> id(x)
   139950507563632
With mutable objects like ``list``, when we mutate the object, the state of the object is modified,
but the reference to the object is unchanged.

Comparison with the exercise on strings and integers:

Since lists are mutable, when ``+=`` is used, the original list object is modified, so no rebinding of *x* is necessary.
We can observe this using *id()*, which gives the memory address of an object. This address does not change after the
``+=`` operation.
.. note::

   Even if the results are the same, there is a subtlety in using the augmented operators.
   In an ``a operator= b`` operation, Python looks up ``a``'s value only once, so it is potentially faster
   than the ``a = a operator b`` operation.
Compare ::

   x = 3
   y = x
   y += 3
   x = ?
   y = ?
.. figure:: _static/figs/augmented_assignment_int2.png
   :width: 400px
   :alt: augmented_assignment
   :figclass: align-center
and ::

   x = [1, 2]
   y = x
   y += [3, 4]
   x = ?
   y = ?
.. figure:: _static/figs/augmented_assignment_list2.png
   :width: 400px
   :alt: list extend
   :figclass: align-center
In this example we have two ways to access the list ``[1, 2]``.
We modify the state of the list itself, but not the references to this object, so the two variables ``x`` and ``y`` still reference the same list, which now contains
``[1, 2, 3, 4]``.
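The aliasing described above can be checked directly in the interpreter; a minimal sketch:

```python
x = [1, 2]
y = x          # no copy is made: y references the same list object as x
y += [3, 4]    # in-place extension mutates that shared object

assert x is y                 # still one single object with two names
assert x == [1, 2, 3, 4]      # the mutation is visible through both names
```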
Exercise
--------
.. note::

   ``sum`` is a function that returns the sum of all the elements of a list.

Without using the Python shell, tell what the effects of the following statements are::

   x = [1, 2, 3, 4]
   x[3] = -4            # What is the value of x now?
   y = sum(x) / len(x)  # What is the value of y? Why?
Solution (using the Python shell ;) )::

   >>> x = [1, 2, 3, 4]
   >>> x[3] = -4
   >>> x
   [1, 2, 3, -4]
   >>> y = sum(x) / len(x)
   >>> y
   0.5

Here, we compute the mean of the values contained in the list ``x``, after having changed its last element to -4.
.. In Python 2 the result would be ``y = 0``:
   ``sum(x)`` is an integer and ``len(x)`` is also an integer, so in Python 2 the result of the division is an integer;
   all the digits after the decimal point are discarded.
Exercise
--------

Draw the representation in memory of the ``x`` and ``y`` variables when the following code is executed::

   x = [1, ['a', 'b', 'c'], 3, 4]
   y = x[1]
   y[2] = 'z'
   # What is the value of x?
.. figure:: _static/figs/list_2-1.png
   :width: 400px
   :alt: nested list
   :figclass: align-center

.. container:: clearer

   .. image:: _static/figs/spacer.png
When we execute ``y = x[1]``, we create ``y``, which references the list ``['a', 'b', 'c']``.
This list has two references on it: ``y`` and ``x[1]``.
.. figure:: _static/figs/list_2-2.png
   :width: 400px
   :alt: nested list
   :figclass: align-center

.. container:: clearer

   .. image:: _static/figs/spacer.png
This object is a list, so it is a mutable object.
So we can access **and** modify it in two ways, via ``y`` or ``x[1]``::

   x = [1, ['a', 'b', 'z'], 3, 4]
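The shared inner list can be checked directly; a minimal sketch:

```python
x = [1, ['a', 'b', 'c'], 3, 4]
y = x[1]        # y references the inner list, not a copy of it
y[2] = 'z'      # mutate the inner list through y

assert x[1] is y                          # both names point to the same inner list
assert x == [1, ['a', 'b', 'z'], 3, 4]    # the change is visible through x
```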
Exercise
--------
.. note::

   A codon is a triplet of nucleotides.
   A nucleotide can be one of the four letters A, C, G, T.

Write a function that returns a list containing strings representing all possible codons.
Write the pseudocode before proposing an implementation.
pseudocode:
"""""""""""

| *function all_codons()*
| *all_codons <- empty list*
| *let vary the first base*
| *for each first base let vary the second base*
| *for each combination first base, second base let vary the third base*
| *add the concatenation base 1 base 2 base 3 to all_codons*
| *return all_codons*
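The pseudocode above translates directly into three nested loops; a minimal sketch (the downloadable ``codons.py`` may differ in details):

```python
def all_codons():
    """Return the list of all 64 codons as 3-letter strings."""
    codons = []
    for base_1 in 'acgt':             # let vary the first base
        for base_2 in 'acgt':         # for each first base, vary the second
            for base_3 in 'acgt':     # for each pair, vary the third
                codons.append(base_1 + base_2 + base_3)
    return codons
```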
first implementation:
"""""""""""""""""""""

::

   python -i codons.py
   >>> codons = all_codons()

:download:`codons.py <_static/code/codons.py>`.
second implementation:
""""""""""""""""""""""
Mathematically speaking, the generation of all codons is the cartesian product
of the vector 'acgt' with itself, 3 times.
In Python there is a function to do that in the ``itertools`` module: `itertools.product <https://docs.python.org/3/library/itertools.html#itertools.product>`_.
::

   python -i codons.py
   >>> codons = all_codons()

:download:`codons_itertools.py <_static/code/codons_itertools.py>`.
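A minimal sketch using ``itertools.product`` (the downloadable ``codons_itertools.py`` may differ in details):

```python
from itertools import product

def all_codons():
    """Return all 64 codons: the cartesian product of 'acgt' with itself, 3 times."""
    return [''.join(triplet) for triplet in product('acgt', repeat=3)]
```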
Exercise
--------

From a list, return a new list without any duplicate, regardless of the order of items.
For example::

   >>> l = [5, 2, 3, 2, 2, 3, 5, 1]
In the specification we can read that ``uniqify`` can work *regardless of the order of the resulting list*.
So we can use the specificity of ``set``::

   >>> list(set(l))
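Wrapped as a function (the name ``uniqify`` comes from the text; the exact signature is an assumption):

```python
def uniqify(items):
    """Return a new list without duplicates; the order of the items is not preserved."""
    return list(set(items))

result = uniqify([5, 2, 3, 2, 2, 3, 5, 1])
assert sorted(result) == [1, 2, 3, 5]
```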
Exercise
--------

We need to compute the occurrences of all kmers of a given length present in a sequence.
Below we propose 2 algorithms.
pseudo code 1
"""""""""""""

| *function get_kmer_occurences(seq, kmer_len)*
| *all_kmers <- generate all possible kmers of kmer_len*
| *occurences <- 0*
| *for each kmer in all_kmers*
| *count occurence of kmer*
| *store occurence*
pseudo code 2
"""""""""""""

| *function get_kmer_occurences(seq, kmer_len)*
| *from i = 0 to sequence length - kmer_len*
| *kmer <- kmer starting at pos i in sequence*
| *increase by one the occurence of kmer*
.. note::

   Computer scientists typically measure an algorithm's efficiency in terms of its worst-case running time,
   which is the largest amount of time an algorithm can take given the most difficult input of a fixed size.
   The advantage of considering the worst-case running time is that we are guaranteed that our algorithm
   will never behave worse than our worst-case estimate.

   Big-O notation compactly describes the running time of an algorithm.
   For example, if your algorithm for sorting an array of n numbers takes roughly n\ :sup:`2` operations for the most difficult dataset,
   then we say that the running time of your algorithm is O(n\ :sup:`2`). In reality, depending on your implementation, it may use any number of operations,
   such as 1.5n\ :sup:`2`, n\ :sup:`2` + n + 2, or 0.5n\ :sup:`2` + 1; all these algorithms are O(n\ :sup:`2`) because big-O notation only cares about the term that grows the fastest with
   respect to the size of the input. This is because as n grows very large, the difference in behavior between two O(n\ :sup:`2`) functions,
   like 999·n\ :sup:`2` and n\ :sup:`2` + 3n + 9999999, is negligible when compared to the behavior of functions from different classes,
   say O(n\ :sup:`2`) and O(n\ :sup:`6`). Of course, we would prefer an algorithm requiring 1/2·n\ :sup:`2` steps to an algorithm requiring 1000·n\ :sup:`2` steps.

   When we write that the running time of an algorithm is O(n\ :sup:`2`), we technically mean that it does not grow faster than a function with a
   leading term of c·n\ :sup:`2`, for some constant c. Formally, a function f(n) is Big-O of a function g(n), or O(g(n)), when f(n) <= c·g(n) for some
   constant c and sufficiently large n.

   For more on Big-O notation, see `A Beginner's Guide to Big-O Notation <http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/>`_.
Compare the pseudocode of each of them and implement the fastest one. ::

   acggcaacatggctggccagtgggctctgagaggagaaagtccagtggatgctcttggtctggttcgtgagcgcaacaca"""
In the first algorithm:

| we first compute all the kmers: we generate 4\ :sup:`kmer length` kmers,
| then we count the occurrences of each kmer in the sequence,
| so for each kmer we read the whole sequence: the algorithm is in O(4\ :sup:`kmer length` * ``sequence length``).

| In the second algorithm we read the sequence only once,
| so the algorithm is in O(``sequence length``).
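Pseudo code 2 can be sketched as below (the downloadable ``kmer.py`` may differ; the function name follows the pseudocode):

```python
def get_kmer_occurences(seq, kmer_len):
    """Count every kmer of length kmer_len in seq, reading the sequence only once."""
    occurences = {}
    for i in range(len(seq) - kmer_len + 1):
        kmer = seq[i:i + kmer_len]                      # kmer starting at position i
        occurences[kmer] = occurences.get(kmer, 0) + 1  # increase its count by one
    return occurences
```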
Compute the occurrences of the 6-mers of the sequence above, and print each 6-mer and its number of occurrences. ::

   aacttc .. 1
   gcaact .. 1
   aaatat .. 2

:download:`kmer.py <_static/code/kmer.py>`.
bonus:
""""""

Print the kmers ordered by occurrences. ::

   aggaaa .. 4
   ttctga .. 3
   ccagtg .. 3

:download:`kmer_2.py <_static/code/kmer_2.py>`.
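One way to print the kmers by decreasing occurrence, assuming the counts are stored in a dict as returned by the counting function (the counts below are illustrative values taken from the expected output):

```python
occurences = {'aggaaa': 4, 'ttctga': 3, 'ccagtg': 3, 'aacttc': 1}

# sort the (kmer, count) pairs by decreasing count
ranked = sorted(occurences.items(), key=lambda item: item[1], reverse=True)
for kmer, count in ranked:
    print(kmer, '..', count)
```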
Exercise
--------

::

   >>> seq = 'acggcaacatggctggccagtgggctctgagaggagaaagtccagtggatgctcttggtctggttcgtgagcgcaacaca'
   >>> print(rev_comp(seq))
   tgtgttgcgctcacgaaccagaccaagagcatccactggactttctcctctcagagcccactggccagccatgttgccgt

:download:`rev_comp.py <_static/code/rev_comp.py>`.
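A minimal loop-based sketch of ``rev_comp`` (the downloadable ``rev_comp.py`` may differ):

```python
def rev_comp(seq):
    """Return the reverse complement of a lowercase DNA sequence."""
    complement = {'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}
    return ''.join(complement[base] for base in reversed(seq))
```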
other solution
""""""""""""""
The correspondence is given by two strings: the first string contains the characters
to change, the second string the corresponding characters in the new string.
Thus the two strings **must** have the same length. The correspondence between
the characters to change and their new values is made according to their position:
the first character of the first string will be replaced by the first character of the second string,
the second character of the first string will be replaced by the second character of the second string, and so on.
So we can write the reverse complement without a loop.
.. literalinclude:: _static/code/rev_comp2.py
   :linenos:
   :language: python
::

   >>> seq = 'acggcaacatggctggccagtgggctctgagaggagaaagtccagtggatgctcttggtctggttcgtgagcgcaacaca'
   >>> print(rev_comp(seq))
   tgtgttgcgctcacgaaccagaccaagagcatccactggactttctcctctcagagcccactggccagccatgttgccgt

:download:`rev_comp2.py <_static/code/rev_comp2.py>`.
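A sketch of this loop-free approach with ``str.maketrans`` and ``str.translate`` (the downloadable ``rev_comp2.py`` may differ; lowercase sequences are assumed):

```python
def rev_comp(seq):
    """Reverse complement without an explicit loop."""
    table = str.maketrans('acgt', 'tgca')  # the two strings must have the same length
    return seq.translate(table)[::-1]      # translate, then reverse the result
```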
Exercise
--------

Consider the following enzymes collection.
We decide to implement enzymes as tuples with the following structure:
``("name", "comment", "sequence", "cut", "end")``

::

   ecor1 = ("EcoRI", "Ecoli restriction enzime I", "gaattc", 1, "sticky")
   ecor5 = ("EcoRV", "Ecoli restriction enzime V", "gatatc", 3, "blunt")
#. Use the functions above to compute the enzymes which cut the dna_1.
#. Apply the same functions to compute the enzymes which cut the dna_2.
#. Compute the difference between the enzymes which cut the dna_1 and the enzymes which cut the dna_2.

.. literalinclude:: _static/code/enzyme_1.py
   :linenos:
   :language: python

::

   from enzyme_1 import *

   enzymes = [ecor1, ecor5, bamh1, hind3, taq1, not1, sau3a1, hae3, sma1]
   dna_1 = one_line(dna_1)
   dna_2 = one_line(dna_2)
   enz_1 = enz_filter(enzymes, dna_1)
   enz_2 = enz_filter(enzymes, dna_2)
   enz1_only = set(enz_1) - set(enz_2)

:download:`enzymes_1.py <_static/code/enzyme_1.py>`.
With this algorithm we find whether an enzyme cuts the dna, but we cannot find all the cuts in the dna for an enzyme.

The code must be adapted as below:

.. literalinclude:: _static/code/enzyme_1_namedtuple.py
   :linenos:
   :language: python

:download:`enzymes_1_namedtuple.py <_static/code/enzyme_1_namedtuple.py>`.
Exercise
--------
Given the following dict::

   d = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}

We want to obtain a new dict with the keys and the values inverted, so we will obtain::

   inverted_d = {'a': 1, 'c': 3, 'b': 2, 'd': 4}
solution ::

   inverted_d = {}
   for key in d.keys():
       inverted_d[d[key]] = key

solution ::

   inverted_d = {}
   for key, value in d.items():
       inverted_d[value] = key

solution ::

   inverted_d = {v: k for k, v in d.items()}
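All three solutions produce the same dict; a quick check of the comprehension version:

```python
d = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
inverted_d = {v: k for k, v in d.items()}

assert inverted_d == {'a': 1, 'b': 2, 'c': 3, 'd': 4}
```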
Create a representation in fasta format of the following sequence::

   TFKFYMPKKATELKHLQCLEEELKPLEEVLNLAQSKNFHLRPRDLISNINVIVLELKGSE
   TTFMCEYADETATIVEFLNRWITFCQSIISTLT"""

::

   >>> s = ">" + name + " " + comment + '\n' + sequence

or ::

   >>> s = ">{name} {comment}\n{sequence}".format(name=name, comment=comment, sequence=sequence)

or ::

   >>> s = f">{name} {comment}\n{sequence}"
Exercise
--------
For the following exercise use the python file :download:`sv40 in fasta <_static/code/sv40_file.py>`, which is a python file with the sequence of sv40 in fasta format
already embedded, and use ``python -i sv40_file.py`` to work.

How long is the sv40 in bp?

Hint: the fasta header is 61 bp long
(http://www.ncbi.nlm.nih.gov/nuccore/J02400.1).
.. literalinclude:: _static/code/fasta_to_one_line.py
   :linenos:
   :language: python

:download:`fasta_to_one_line.py <_static/code/fasta_to_one_line.py>`.

::

   >>> import sv40_file
   >>> from fasta_to_one_line import fasta_to_one_line
   >>>
   >>> sv40_seq = fasta_to_one_line(sv40_file.sv40_fasta)
   >>> print(len(sv40_seq))
   5243
Consider the following restriction enzymes:

* BamHI (ggatcc)
* EcorI (gaattc)
* HindIII (aagctt)
* SmaI (cccggg)

For each of them, tell whether it has recognition sites in sv40 (just answer by True or False).

::
   >>> "ggatcc".upper() in sv40_sequence
   True
   >>> "gaattc".upper() in sv40_sequence
   True
   >>> "aagctt".upper() in sv40_sequence
   True
   >>> "cccggg".upper() in sv40_sequence
   False
For the enzymes which have a recognition site, can you give their positions?

::

   >>> sv40_sequence = sv40_sequence.lower()
   >>> sv40_sequence.find("ggatcc")
   2532
   >>> # remember that strings are indexed from 0
   >>> 2532 + 1
   2533
   >>> # the recognition motif of BamHI starts at 2533
   >>> sv40_sequence.find("gaattc")
   1781
   >>> sv40_sequence.find("aagctt")
   1045
   >>> # HindIII -> 1046
Is there only one site in sv40 per enzyme?

The ``find`` method gives the index of the first occurrence, or -1 if the substring is not found.
So we cannot determine the number of occurrences of a site with the ``find`` method alone.
We can know how many sites are present with the ``count`` method.

::

   >>> sv40_seq.count("ggatcc")
   1
   >>> sv40_seq.count("gaattc")
   1
   >>> sv40_seq.count("aagctt")
   6
   >>> sv40_seq.count("cccggg")
   0

We will see how to determine the positions of all occurrences of restriction sites when we learn looping and conditions.
Exercise
--------
We want to perform a PCR on sv40. Can you give the length and the sequence of the amplicon?

Write a function which has 3 parameters ``sequence``, ``primer_1`` and ``primer_2`` and returns the amplicon length.

* *We consider only the cases where primer_1 and primer_2 are present in the sequence.*
* *To simplify the exercise, the 2 primers can be read directly in the sv40 sequence (i.e. no need to reverse-complement).*

Test your algorithm with the following primers:

| primer_1 : 5' CGGGACTATGGTTGCTGACT 3'
| primer_2 : 5' TCTTTCCGCCTCAGAAGGTA 3'

Write the function in pseudocode before implementing it.

| *function amplicon_len(sequence, primer_1, primer_2)*
| *pos_1 <- find position of primer_1 in sequence*
| *pos_2 <- find position of primer_2 in sequence*
| *amplicon length <- pos_2 + length(primer_2) - pos_1*
| *return amplicon length*
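Following the pseudocode, a minimal sketch (the check below uses a toy sequence; as in the exercise, both primers are assumed present, with ``primer_1`` upstream of ``primer_2``):

```python
def amplicon_len(sequence, primer_1, primer_2):
    """Length of the amplicon delimited by the two primers."""
    pos_1 = sequence.find(primer_1)   # start of the upstream primer
    pos_2 = sequence.find(primer_2)   # start of the downstream primer
    return pos_2 + len(primer_2) - pos_1
```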