.. sectnum::
   :start: 5

.. _Collection_Data_types:

*********************
Collection Data types
*********************
Exercise
--------

| Draw the representation in memory of the following expressions.
| What is the data type of each object?

::
    x = [1, 2, 3, 4]
    y = x[1]
    y = 3.14
    x[1] = 'foo'
.. figure:: _static/figs/list_1.png
   :width: 400px
   :alt: list
   :figclass: align-center
::

    x = [1, 2, 3, 4]
    x += [5, 6]
.. figure:: _static/figs/augmented_assignment_list.png
   :width: 400px
   :alt: augmented assignment on a list
   :figclass: align-center
::

    >>> x = [1, 2, 3, 4]
    >>> id(x)
    139950507563632
    >>> x += [5,6]
    >>> id(x)
    139950507563632
With mutable objects like ``list``, when we mutate the object, the state of the object is modified,
but the reference to the object is unchanged.
Compare with the exercise on strings and integers:
Since lists are mutable, when ``+=`` is used, the original list object is modified, so no rebinding of *x* is necessary.
We can observe this using *id()*, which gives the memory address of an object. This address does not change after the
``+=`` operation.
.. note::

   Even if the results are the same, there is a subtlety in using augmented operators:
   in an ``a operator= b`` operation, Python looks up ``a``'s value only once, so it is potentially faster
   than the ``a = a operator b`` operation.
Compare ::

    x = 3
    y = x
    y += 3
    x = ?
    y = ?
.. figure:: _static/figs/augmented_assignment_int2.png
   :width: 400px
   :alt: augmented assignment
   :figclass: align-center
and ::

    x = [1, 2]
    y = x
    y += [3, 4]
    x = ?
    y = ?
.. figure:: _static/figs/augmented_assignment_list2.png
   :width: 400px
   :alt: list extend
   :figclass: align-center
In this example we have two ways to access the list ``[1, 2]``.
If we modify the state of the list itself, but not the references to this object, then the two variables ``x`` and ``y``
still reference the same list, now containing ``[1, 2, 3, 4]``.
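Both behaviours can be checked directly in the interpreter; a minimal sketch (Python 3)::

    # ints are immutable: y += 3 rebinds y to a new int object, x is untouched
    x = 3
    y = x
    y += 3
    print(x, y)  # 3 6

    # lists are mutable: y += [3, 4] mutates the shared list in place
    x = [1, 2]
    y = x
    y += [3, 4]
    print(x)       # [1, 2, 3, 4]
    print(x is y)  # True: both names still reference the same object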
Exercise
--------
Without using the python shell, what is the result of the following statements?

.. note::

   ``sum`` is a function which returns the sum of the elements of a list.
::

    x = [1, 2, 3, 4]
    x[3] = -4         # what is the value of x now ?
    y = sum(x)/len(x) # what is the value of y ? why ?

``y = 0.5``: ``x`` is now ``[1, 2, 3, -4]``, so ``sum(x)`` is 2 and ``len(x)`` is 4 (and ``/`` performs true division in Python 3).
.. warning::
Draw the representation in memory of the following expressions. ::
    x = [1, ['a', 'b', 'c'], 3, 4]
    y = x[1]
    y[2] = 'z'
    # what is the value of x ?
.. figure:: _static/figs/list_2-1.png
   :width: 400px
   :alt: list
   :figclass: align-center
.. container:: clearer

   .. image:: _static/figs/spacer.png
When we execute *y = x[1]*, we create ``y``, which references the list ``['a', 'b', 'c']``.
This list has 2 references on it: ``y`` and ``x[1]``.
.. figure:: _static/figs/list_2-2.png
   :width: 400px
   :alt: list
   :figclass: align-center

.. container:: clearer

   .. image:: _static/figs/spacer.png
This object is a list, so it is a mutable object.
So we can access **and** modify it through either ``y`` or ``x[1]`` ::

    x = [1, ['a', 'b', 'z'], 3, 4]
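The whole exercise can be replayed in the interpreter; a minimal sketch::

    x = [1, ['a', 'b', 'c'], 3, 4]
    y = x[1]          # y references the same inner list as x[1]
    y[2] = 'z'        # mutating through y is visible through x
    print(x)          # [1, ['a', 'b', 'z'], 3, 4]
    print(y is x[1])  # True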
Exercise
--------
From the list ``l = [1, 2, 3, 4, 5, 6, 7, 8, 9]``, generate 2 lists: ``l1`` containing the elements at even indices and ``l2`` the elements at odd indices. ::
    l1 = l[::2]
    l2 = l[1::2]
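Checking the slices in the interpreter::

    l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    l1 = l[::2]   # every second item starting at index 0
    l2 = l[1::2]  # every second item starting at index 1
    print(l1)  # [1, 3, 5, 7, 9]
    print(l2)  # [2, 4, 6, 8]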
Exercise
--------
Generate a list containing all codons.
pseudocode:
"""""""""""
first implementation:
"""""""""""""""""""""
::

    python -i codons.py
    >>> codons = all_codons()

:download:`codons.py <_static/code/codons.py>`.
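The distributed ``codons.py`` is linked above; a minimal version of ``all_codons`` could look like this (a sketch, not necessarily the distributed implementation)::

    def all_codons():
        """Build the list of the 64 codons with three nested loops."""
        codons = []
        for base1 in 'acgt':
            for base2 in 'acgt':
                for base3 in 'acgt':
                    codons.append(base1 + base2 + base3)
        return codons

    print(len(all_codons()))  # 64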
second implementation:
""""""""""""""""""""""
Mathematically speaking, the generation of all codons is the cartesian product
of the alphabet 'acgt' with itself 3 times.
In python there is a function to do that in the ``itertools`` module: `itertools.product <https://docs.python.org/3/library/itertools.html#itertools.product>`_
::

    python -i codons.py
    >>> codons = all_codons()
:download:`codons_itertools.py <_static/code/codons_itertools.py>`.
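With ``itertools.product``, ``all_codons`` can be written more compactly; a possible sketch::

    from itertools import product

    def all_codons():
        # cartesian product of 'acgt' with itself 3 times,
        # each resulting tuple joined into a 3-letter string
        return [''.join(bases) for bases in product('acgt', repeat=3)]

    print(len(all_codons()))  # 64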
Exercise
--------
From a list, return a new list without any duplicates, regardless of the order of items.
For example: ::

    >>> l = [5,2,3,2,2,3,5,1]
If we plan to use ``uniqify`` with a large list, we should find a better algorithm.
In the specification we can read that ``uniqify`` can work *regardless of the order of the resulting list*.
So we can use the specificity of ``set`` ::

    >>> list(set(l))
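A possible ``uniqify`` based on this idea (a sketch; the order of the result is not guaranteed)::

    def uniqify(items):
        """Return a new list without duplicates; order is not preserved."""
        return list(set(items))

    print(sorted(uniqify([5, 2, 3, 2, 2, 3, 5, 1])))  # [1, 2, 3, 5]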
Exercise
--------
We need to compute the occurrences of all kmers of a given length present in a sequence.
Below we propose 2 algorithms.
pseudo code 1
"""""""""""""
| *function get_kmer_occurrences(seq, kmer_len)*
| *all_kmers <- generate all possible kmers of kmer_len*
| *occurrences <- 0*
| *for each kmer in all_kmers*
| *count occurrences of kmer in seq*
| *store occurrence*
pseudo code 2
"""""""""""""
| *from i = 0 to sequence length - kmer_len*
| *kmer <- kmer starting at position i in sequence*
| *increase the occurrence count of kmer*
.. note::

   Computer scientists typically measure an algorithm's efficiency in terms of its worst-case running time,
   which is the largest amount of time an algorithm can take given the most difficult input of a fixed size.
   The advantage of considering the worst-case running time is that we are guaranteed that our algorithm
   will never behave worse than our worst-case estimate.

   Big-O notation compactly describes the running time of an algorithm.
   For example, if your algorithm for sorting an array of n numbers takes roughly n\ :sup:`2` operations for the most difficult dataset,
   then we say that the running time of your algorithm is O(n\ :sup:`2`). In reality, depending on your implementation, it may use any number of operations,
   such as 1.5n\ :sup:`2`, n\ :sup:`2` + n + 2, or 0.5n\ :sup:`2` + 1; all these algorithms are O(n\ :sup:`2`) because big-O notation only cares about the term that grows the fastest with
   respect to the size of the input. This is because as n grows very large, the difference in behavior between two O(n\ :sup:`2`) functions,
   like 999 · n\ :sup:`2` and n\ :sup:`2` + 3n + 9999999, is negligible when compared to the behavior of functions from different classes,
   say O(n\ :sup:`2`) and O(n\ :sup:`6`). Of course, we would prefer an algorithm requiring 1/2 · n\ :sup:`2` steps to an algorithm requiring 1000 · n\ :sup:`2` steps.

   When we write that the running time of an algorithm is O(n\ :sup:`2`), we technically mean that it does not grow faster than a function with a
   leading term of c · n\ :sup:`2`, for some constant c. Formally, a function f(n) is Big-O of a function g(n), or O(g(n)), when f(n) <= c · g(n) for some
   constant c and sufficiently large n.

   For more on Big-O notation, see `A Beginner's Guide to Big-O Notation <http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/>`_.
Compare the pseudocode of each of them and implement the fastest one. ::
    acggcaacatggctggccagtgggctctgagaggagaaagtccagtggatgctcttggtctggttcgtgagcgcaacaca"""
In the first algorithm:

| we first compute all the kmers: we generate 4\ :sup:`kmer length` kmers,
| then we count the occurrences of each kmer in the sequence,
| so for each kmer we read the whole sequence: the algorithm is in O(4\ :sup:`kmer length` * ``sequence length``).

| In the second algorithm, we read the sequence only once,
| so the algorithm is in O(``sequence length``).
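A minimal sketch of the second, single-pass algorithm (the function name follows the pseudocode above; the distributed solution may differ)::

    def get_kmer_occurrences(seq, kmer_len):
        """Count kmer occurrences in a single pass over seq."""
        occurrences = {}
        for i in range(len(seq) - kmer_len + 1):
            kmer = seq[i:i + kmer_len]
            occurrences[kmer] = occurrences.get(kmer, 0) + 1
        return occurrences

    print(get_kmer_occurrences('acgacg', 3))  # {'acg': 2, 'cga': 1, 'gac': 1}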
Compute the 6-mer occurrences of the sequence above, and print each 6-mer and its number of occurrences: ::

    aacttc .. 1
    gcaact .. 1
    aaatat .. 2
:download:`kmer.py <_static/code/kmer.py>`.
bonus:

Print the kmers ordered by occurrences: ::

    aggaaa .. 4
    ttctga .. 3
    ccagtg .. 3
:download:`kmer_2.py <_static/code/kmer_2.py>`.
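Sorting kmers by decreasing occurrence can be done with ``sorted`` and a key function; a sketch, assuming ``occurrences`` is a dict as computed above::

    occurrences = {'aacttc': 1, 'aggaaa': 4, 'ccagtg': 3}
    # sort the (kmer, count) pairs on the count, largest first
    for kmer, count in sorted(occurrences.items(), key=lambda item: item[1], reverse=True):
        print(kmer, '..', count)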
Exercise
--------
::

    >>> seq = 'acggcaacatggctggccagtgggctctgagaggagaaagtccagtggatgctcttggtctggttcgtgagcgcaacaca'
    >>> print(rev_comp(seq))
    tgtgttgcgctcacgaaccagaccaagagcatccactggactttctcctctcagagcccactggccagccatgttgccgt
:download:`rev_comp.py <_static/code/rev_comp.py>`.
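The distributed ``rev_comp.py`` is linked above; a loop-based sketch of ``rev_comp`` could look like this (assuming lowercase acgt sequences)::

    def rev_comp(seq):
        """Reverse complement of a dna sequence."""
        complement = {'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}
        return ''.join(complement[base] for base in reversed(seq))

    print(rev_comp('aacg'))  # cgtt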
other solution
""""""""""""""
The first string contains the characters
to change, the second string the corresponding characters in the new string.
Thus the two strings **must** have the same length. The correspondence between
the characters to change and their new values is made according to their position:
the first character of the first string will be replaced by the first character of the second string,
the second character of the first string will be replaced by the second character of the second string, and so on.
So we can write the reverse complement without a loop.
.. literalinclude:: _static/code/rev_comp2.py
   :linenos:
   :language: python
::

    >>> seq = 'acggcaacatggctggccagtgggctctgagaggagaaagtccagtggatgctcttggtctggttcgtgagcgcaacaca'
    >>> print(rev_comp(seq))
    tgtgttgcgctcacgaaccagaccaagagcatccactggactttctcctctcagagcccactggccagccatgttgccgt
:download:`rev_comp2.py <_static/code/rev_comp2.py>`.
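The same idea as ``rev_comp2.py`` (linked above) can be sketched with ``str.maketrans`` and ``str.translate`` (Python 3)::

    # the translation table maps each base to its complement
    complement_table = str.maketrans('acgt', 'tgca')

    def rev_comp(seq):
        # translate every base at once, then reverse the string
        return seq.translate(complement_table)[::-1]

    print(rev_comp('aacg'))  # cgtt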
Exercise
--------

Consider the following collection of enzymes.
We decide to implement enzymes as tuples with the following structure:
``("name", "comment", "sequence", "cut", "end")``

::

    ecor1 = ("EcoRI", "Ecoli restriction enzyme I", "gaattc", 1, "sticky")
    ecor5 = ("EcoRV", "Ecoli restriction enzyme V", "gatatc", 3, "blunt")
#. use the functions above to compute the enzymes which cut the dna_1
#. apply the same functions to compute the enzymes which cut the dna_2
#. compute the difference between the enzymes which cut the dna_1 and the enzymes which cut the dna_2
.. literalinclude:: _static/code/enzyme_1.py
   :linenos:
   :language: python
::

    from enzyme_1 import *

    enzymes = [ecor1, ecor5, bamh1, hind3, taq1, not1, sau3a1, hae3, sma1]
    dna_1 = one_line(dna_1)
    dna_2 = one_line(dna_2)
    enz_1 = enz_filter(enzymes, dna_1)
    enz_2 = enz_filter(enzymes, dna_2)
    enz1_only = set(enz_1) - set(enz_2)

:download:`enzymes_1.py <_static/code/enzyme_1.py>`.
With this algorithm we find whether an enzyme cuts the dna, but we cannot find all the cut positions in the dna for a given enzyme.
The code must be adapted as below:

.. literalinclude:: _static/code/enzyme_1_namedtuple.py
   :linenos:
   :language: python
:download:`enzymes_1_namedtuple.py <_static/code/enzyme_1_namedtuple.py>`.
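A minimal sketch of the ``namedtuple`` adaptation (field names taken from the tuple structure above)::

    from collections import namedtuple

    Enzyme = namedtuple("Enzyme", ["name", "comment", "sequence", "cut", "end"])

    ecor1 = Enzyme("EcoRI", "Ecoli restriction enzyme I", "gaattc", 1, "sticky")
    print(ecor1.sequence)  # gaattc: fields are accessible by name
    print(ecor1[2])        # gaattc: still usable as a plain tuple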
Exercise
--------
Given the following dict: ::

    d = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
We want to obtain a new dict with the keys and the values inverted, so we will obtain: ::

    inverted_d = {'a': 1, 'c': 3, 'b': 2, 'd': 4}
solution ::
    inverted_d = {}
    for key in d.keys():
        inverted_d[d[key]] = key
solution ::

    inverted_d = {}
    for key, value in d.items():
        inverted_d[value] = key
solution ::

    inverted_d = {v: k for k, v in d.items()}
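All three solutions assume the values of ``d`` are hashable and unique: with duplicated values, later keys silently overwrite earlier ones. ::

    d = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
    inverted_d = {v: k for k, v in d.items()}
    print(inverted_d)  # {'a': 1, 'b': 2, 'c': 3, 'd': 4}

    # with duplicated values, the last key wins
    print({v: k for k, v in {1: 'a', 2: 'a'}.items()})  # {'a': 2}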