Dataset schema (value ranges from the dataset viewer):
- title: string, length 12 to 150
- question_id: int64, 469 to 40.1M
- question_score: int64, 2 to 5.52k
- question_date: string date, 2008-08-02 15:11:16 to 2016-10-18 06:16:31
- answer_id: int64, 536 to 40.1M
- answer_score: int64, 7 to 8.38k
- answer_date: string date, 2008-08-02 18:49:07 to 2016-10-18 06:19:33
- tags: list, length 1 to 5
- question_body_md: string, length 15 to 30.2k
- answer_body_md: string, length 11 to 27.8k
How to feed a placeholder?
33,810,990
4
2015-11-19T17:52:29Z
33,812,296
11
2015-11-19T19:03:15Z
[ "python", "tensorflow" ]
I am trying to implement a simple feed forward network. However, I can't figure out how to feed a `Placeholder`. This example: ``` import tensorflow as tf num_input = 2 num_hidden = 3 num_output = 2 x = tf.placeholder("float", [num_input, 1]) W_hidden = tf.Variable(tf.zeros([num_hidden, num_input])) W_out = tf....
To feed a placeholder, you use the `feed_dict` argument to `Session.run()` (or `Tensor.eval()`). Let's say you have the following graph, with a placeholder: ``` x = tf.placeholder(tf.float32, shape=[2, 2]) y = tf.constant([[1.0, 1.0], [0.0, 1.0]]) z = tf.matmul(x, y) ``` If you want to evaluate `z`, you must feed a v...
Python creating tuple groups in list from another list
33,812,142
4
2015-11-19T18:54:03Z
33,812,257
8
2015-11-19T19:00:57Z
[ "python", "list", "tuples" ]
Let's say I have this data: ``` data = [1, 2, 3, -4, -5, 3, 2, 4, -2, 5, 6, -5, -1, 1] ``` I need it to be grouped in another list by tuples. One tuple consists of two lists. One for positive numbers, another for negative. And tuples should be created by checking what kind of number it is. Last negative number (I mea...
I'd use [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) to make a list of consecutive tuples containing positive/negative lists first, and then group into consecutive pairs. This can still be done in one pass through the list by taking advantage of generators: ``` from itertoo...
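As a minimal, self-contained sketch of the `groupby` idea described above (not the answer's exact code, and assuming the list starts with a positive run): split the list into maximal same-sign runs, then pair consecutive runs. Note this simple version drops a trailing run that has no partner.

```python
from itertools import groupby

def group_signs(data):
    # Split into maximal runs of same-sign numbers, then pair each
    # positive run with the negative run that follows it.
    runs = [list(g) for _, g in groupby(data, key=lambda n: n < 0)]
    it = iter(runs)
    return list(zip(it, it))   # zip drops a trailing unpaired run

data = [1, 2, 3, -4, -5, 3, 2, 4, -2, 5, 6, -5, -1, 1]
print(group_signs(data))
# → [([1, 2, 3], [-4, -5]), ([3, 2, 4], [-2]), ([5, 6], [-5, -1])]
```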
send_file() when called return text document instead of an image
33,818,466
4
2015-11-20T03:27:52Z
33,818,894
7
2015-11-20T04:16:26Z
[ "python", "web", "flask" ]
I want to send an image file from the server side to the client side. I am using the Flask framework. But the problem is that whenever I call the route containing `send_file()`, the response returned is a file. When I click this file, `gedit` opens it with nothing in it, which suggests a text file was written. I referred the f...
1. `resp` is a `requests.models.Response` object, not string nor bytes: ``` >>> import requests >>> todown = 'https://igcdn-photos-e-a.akamaihd.net//hphotos-ak-xaf1//t51.2885-15//e35//12093691_1082288621781484_1524190206_n.jpg' >>> resp = requests.get(todown) >>> resp <Response [200]> >>> type(res...
UnicodeDecodeError: ('utf-8' codec) while reading a csv file
33,819,557
7
2015-11-20T05:22:13Z
33,819,765
8
2015-11-20T05:41:32Z
[ "python", "pandas", "utf-8", "python-unicode" ]
What I am trying to do is read a CSV to make a DataFrame, make changes in a column, update/write the changed values back into the same CSV (`to_csv`), and then read that CSV again to make another DataFrame. There I am getting an error ``` UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe7 in position 7: invalid ...
# Known encoding If you know the encoding of the file you want to read in, you can use ``` pd.read_csv('filename.txt', encoding='encoding') ``` These are the possible encodings: <https://docs.python.org/3/library/codecs.html#standard-encodings> # Unknown encoding If you do not know the encoding, you can try to use...
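If pandas isn't strictly needed for the diagnosis, the try-several-encodings idea can be sketched with plain `bytes.decode` (the candidate encodings here are an assumption; adjust them to whatever your data plausibly uses):

```python
def decode_with_fallback(raw: bytes, encodings=('utf-8', 'cp1252', 'latin-1')):
    # Try each candidate encoding in turn; latin-1 accepts any byte,
    # so it acts as a last resort that never raises.
    for enc in encodings:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            pass

text, enc = decode_with_fallback(b'caf\xe9')   # 0xe9 alone is invalid UTF-8
print(text, enc)  # → café cp1252
```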
How to sort only few values inside a list in Python
33,822,603
9
2015-11-20T09:00:40Z
33,823,004
7
2015-11-20T09:20:52Z
[ "python", "list", "sorting" ]
Suppose ``` A = [9, 5, 34, 33, 32, 31, 300, 30, 3, 256] ``` I want to sort only a particular section in a list. For example, here I want to sort only `[300, 30, 3]` so that overall list becomes: ``` A = [9, 5, 34, 33, 32, 31, 3, 30, 300, 256] ``` Suppose `B = [300, 30, 400, 40, 500, 50, 600, 60]` then after sorting...
Since each expected sequence contains numbers that share a common coefficient times a power of ten, you can use a `scientific_notation` function which returns the common coefficient. Then you can categorize your numbers based on this function and concatenate them. ``` >>> from operator import itemgetter >>> from iter...
Why won't dynamically adding a `__call__` method to an instance work?
33,824,228
8
2015-11-20T10:22:12Z
33,824,320
8
2015-11-20T10:27:21Z
[ "python", "instance" ]
In both Python 2 and Python 3 the code: ``` class Foo(object): pass f = Foo() f.__call__ = lambda *args : args f(1, 2, 3) ``` raises the error `Foo object is not callable`. Why does that happen? PS: With old-style classes it works as expected. PPS: The behavior is intended (see accepted answer). As a work-a...
Double-underscore methods are always looked up on the class, never the instance. See [*Special method lookup for new-style classes*](https://docs.python.org/2/reference/datamodel.html#special-method-lookup-for-new-style-classes): > For new-style classes, implicit invocations of special methods are only guaranteed to w...
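A short demonstration of the rule (the lambda workaround is illustrative; note that assigning to `Foo.__call__` affects every instance of the class, not just `f`):

```python
class Foo(object):
    pass

f = Foo()
f.__call__ = lambda *args: args   # stored on the instance, but ignored by f(...)
try:
    f(1, 2, 3)
except TypeError:
    print("instance __call__ is not used")

# Special methods are looked up on the type, so attach it there instead:
Foo.__call__ = lambda self, *args: args
print(f(1, 2, 3))  # → (1, 2, 3)
```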
How to convert this list into a dictionary
33,824,334
4
2015-11-20T10:28:27Z
33,824,432
9
2015-11-20T10:33:52Z
[ "python", "list", "dictionary" ]
I have a list currently that looks like this ``` list = [['hate', '10'], ['would', '5'], ['hello', '10'], ['pigeon', '1'], ['adore', '10']] ``` I want to convert it to a dictionary like this ``` dict = {'hate': '10', 'would': '5', 'hello': '10', 'pigeon': '1', 'adore': '10'} ``` So basically the `list [i][0]` will...
Use the `dict` constructor: ``` In [1]: lst = [['hate', '10'], ['would', '5'], ['hello', '10'], ['pigeon', '1'], ['adore', '10']] In [2]: dict(lst) Out[2]: {'adore': '10', 'hate': '10', 'hello': '10', 'pigeon': '1', 'would': '5'} ``` Note that from your edit it seems you need the values to be integers rather than s...
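To complete the truncated point about integer values, one option is a dict comprehension instead of the plain `dict()` constructor, converting each score on the way in:

```python
lst = [['hate', '10'], ['would', '5'], ['hello', '10'],
       ['pigeon', '1'], ['adore', '10']]

# Convert the scores to int while building the dictionary.
d = {word: int(score) for word, score in lst}
print(d)  # → {'hate': 10, 'would': 5, 'hello': 10, 'pigeon': 1, 'adore': 10}
```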
Why is pos_tag() so painfully slow and can this be avoided?
33,829,160
5
2015-11-20T14:29:31Z
33,829,434
10
2015-11-20T14:43:38Z
[ "python", "nltk" ]
I want to be able to get POS-Tags of sentences one by one like in this manner: ``` def __remove_stop_words(self, tokenized_text, stop_words): sentences_pos = nltk.pos_tag(tokenized_text) filtered_words = [word for (word, pos) in sentences_pos if pos not in stop_words and word not in s...
For nltk version 3.1, inside [`nltk/tag/__init__.py`](https://github.com/nltk/nltk/blob/develop/nltk/tag/__init__.py#L110), `pos_tag` is defined like this: ``` from nltk.tag.perceptron import PerceptronTagger def pos_tag(tokens, tagset=None): tagger = PerceptronTagger() return _pos_tag(tokens, tagset, tagger) ...
passing bash array to python list
33,829,444
5
2015-11-20T14:44:09Z
33,829,676
9
2015-11-20T14:55:12Z
[ "python", "arrays", "linux", "bash" ]
I'm trying to pass an array from bash to python using the old getenv method however I keep getting this error: ``` ./crcFiles.sh: line 7: export: `0021': not a valid identifier Traceback (most recent call last): File "/shares/web/vm3618/optiload/prog/legalLitres.py", line 30, in <module> for i in mdcArray.split(...
When you `export` a variable from the shell, what you are really doing is adding it to the POSIX "environment" array that all child processes inherit. But the POSIX environment is a flat array of name=value strings; it cannot itself contain arrays. So Bash doesn't even attempt to put arrays there. It will let you `expo...
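One common workaround, sketched under the assumption that no array element contains spaces: export the array as a single joined string in bash (e.g. `export mdcArray="${mdcArray[*]}"`) and split it back on the Python side:

```python
import os

# Simulate what the shell would have exported with
#   export mdcArray="${mdcArray[*]}"
os.environ['mdcArray'] = '0021 0022 0023'

items = os.environ['mdcArray'].split()
print(items)  # → ['0021', '0022', '0023']
```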
Printing the loss during TensorFlow training
33,833,818
8
2015-11-20T18:41:15Z
33,834,561
10
2015-11-20T19:26:32Z
[ "python", "tensorflow" ]
I am looking at the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step. My training loop currently looks like this: ``` for i in range(100): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys...
You can fetch the value of `cross_entropy` by adding it to the list of arguments to `sess.run(...)`. For example, your `for`-loop could be rewritten as follows: ``` for i in range(100): batch_xs, batch_ys = mnist.train.next_batch(100) cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) _, loss_val = sess.run([t...
Type hints: solve circular dependency
33,837,918
5
2015-11-20T23:46:07Z
33,844,891
7
2015-11-21T15:06:56Z
[ "python", "type-hinting", "python-3.5" ]
The following produces `NameError: name 'Client' is not defined`. How can I solve it? ``` class Server(): def register_client(self, client: Client): pass class Client(): def __init__(self, server: Server): server.register_client(self) ```
You can use a [forward reference](http://legacy.python.org/dev/peps/pep-0484/#forward-references) by using a *string* name for the not-yet-defined `Client` class: ``` class Server(): def register_client(self, client: 'Client'): pass ```
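A self-contained sketch showing that the string annotation resolves once both classes exist (`typing.get_type_hints` is used here only to demonstrate the resolution):

```python
import typing

class Server:
    def register_client(self, client: 'Client'):  # forward reference
        self.client = client

class Client:
    def __init__(self, server: Server):
        server.register_client(self)

# The string 'Client' resolves to the real class once it is defined.
hints = typing.get_type_hints(Server.register_client)
print(hints['client'])
```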
Is there any difference between Python list slicing [-1:] and [-1]?
33,841,023
4
2015-11-21T07:52:48Z
33,841,106
8
2015-11-21T08:01:23Z
[ "python", "list" ]
I have read a snippet of code like this: ``` s = self.buffer_file.readline() if s[-1:] == "\n": return s ``` And if I do this: ``` s = 'abc' In [78]: id(s[-1:]), id(s[-1]) Out[78]: (140419827715248, 140419827715248) In [79]: id(s[-1:]) is id(s[-1]) Out[79]: False In [80]: id(s[-1:]) == id(s[-1]) Out[80]: True ...
The difference is that the result of slicing a list is a list ``` x = [1, 2, 3] print(x[-1]) # --> 3 print(x[-1:]) # --> [3] ``` The second case just happens to be a list of one element, but it's still a list. Note however that Python doesn't have a `char` type distinct from the `str` type and this means that both...
Is there any difference between Python list slicing [-1:] and [-1]?
33,841,023
4
2015-11-21T07:52:48Z
33,841,242
7
2015-11-21T08:19:30Z
[ "python", "list" ]
I have read a snippet of code like this: ``` s = self.buffer_file.readline() if s[-1:] == "\n": return s ``` And if I do this: ``` s = 'abc' In [78]: id(s[-1:]), id(s[-1]) Out[78]: (140419827715248, 140419827715248) In [79]: id(s[-1:]) is id(s[-1]) Out[79]: False In [80]: id(s[-1:]) == id(s[-1]) Out[80]: True ...
The key practical benefit to `s[-1:]` rather than `s[-1]` is that the former will produce an empty iterable rather than stopping with a traceback. ``` >>> 'hi'[-1:] 'i' >>> 'hi'[-1] 'i' >>> ''[-1:] '' >>> ''[-1] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: string index out of ra...
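A tiny illustration of why the readline snippet in the question uses `s[-1:]`:

```python
def ends_with_newline(s: str) -> bool:
    # s[-1:] is '' for the empty string, so there is no IndexError
    # on empty input, unlike s[-1].
    return s[-1:] == '\n'

print(ends_with_newline('line\n'))  # → True
print(ends_with_newline(''))        # → False
```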
check if a key exists in a bucket in s3 using boto3
33,842,944
12
2015-11-21T11:46:03Z
33,843,019
20
2015-11-21T11:53:54Z
[ "python", "amazon-s3", "boto3" ]
I would like to know if a key exists in boto3. I can loop over the bucket contents and check the key if it matches. But that seems longer and overkill. Boto3 official docs explicitly state how to do this. Maybe I am missing the obvious. Can anybody point out how I can achieve this?
Boto 2's `boto.s3.key.Key` object used to have an `exists` method that checked if the key existed on S3 by doing a HEAD request and looking at the result, but it seems that no longer exists. You have to do it yourself: ``` import boto3 import botocore s3 = boto3.resource('s3') exists = False try: s3.Obj...
check if a key exists in a bucket in s3 using boto3
33,842,944
12
2015-11-21T11:46:03Z
34,562,141
12
2016-01-02T02:53:50Z
[ "python", "amazon-s3", "boto3" ]
I would like to know if a key exists in boto3. I can loop over the bucket contents and check the key if it matches. But that seems longer and overkill. Boto3 official docs explicitly state how to do this. Maybe I am missing the obvious. Can anybody point out how I can achieve this?
I'm not a big fan of using exceptions for control flow. This is an alternative approach that works in boto3: ``` import boto3 s3 = boto3.resource('s3') bucket = s3.Bucket('my-bucket') key = 'dootdoot.jpg' objs = list(bucket.objects.filter(Prefix=key)) if len(objs) > 0 and objs[0].key == key: print("Exists!") else: print("Doesn't exist...
How to serialize groups of a user with Django-Rest-Framework
33,844,003
5
2015-11-21T13:39:13Z
33,844,179
8
2015-11-21T13:57:59Z
[ "python", "django", "rest", "django-rest-framework" ]
I'm trying to get a user's groups with Django REST framework, but all I get is an empty field named "groups". This is my UserSerializer: ``` class UserSerializer(serializers.ModelSerializer): class Meta: model = User fields = ('url', 'username', 'email', 'is_staff', 'groups') ``` any ideas ho...
You have to specify that it's a nested relationship: ``` class GroupSerializer(serializers.ModelSerializer): class Meta: model = Group fields = ('name',) class UserSerializer(serializers.ModelSerializer): groups = GroupSerializer(many=True) class Meta: model = User ...
how to set rmse cost function in tensorflow
33,846,069
3
2015-11-21T16:58:43Z
37,511,638
7
2016-05-29T15:23:36Z
[ "python", "tensorflow" ]
I have a cost function in TensorFlow: ``` activation = tf.add(tf.mul(X, W), b) cost = (tf.pow(Y-y_model, 2)) # use sqr error for cost function ``` Update: I am trying out this example <https://github.com/nlintz/TensorFlow-Tutorials/blob/master/1_linear_regression.py> How can I change it to an RMSE cost function? Pleas...
``` tf.sqrt(tf.reduce_mean(tf.square(tf.sub(targets, outputs)))) ```
Why is the scapy installation failing on Mac?
33,849,901
4
2015-11-21T23:33:41Z
33,855,601
7
2015-11-22T14:00:06Z
[ "python", "osx", "python-3.x", "scapy" ]
When I try installing scapy on Mac, I get this error: ``` Collecting scapy Downloading scapy-2.3.1.zip (1.1MB) 100% |████████████████████████████████| 1.1MB 436kB/s Complete output from command python setup.py egg_info: Traceback (most re...
To install scapy for python3, you have to run `pip3 install scapy-python3`. Just `pip3 install scapy` will install old 2.x version, which is not python3 compatible.
Django Rest Framework Serializer Relations: How to get list of all child objects in parent's serializer?
33,853,255
5
2015-11-22T09:15:03Z
33,853,315
8
2015-11-22T09:25:03Z
[ "python", "django", "serialization", "django-rest-framework" ]
I'm new to DRF and have just started building an API. I have two models, a child model connected to a parent model with a foreign key. Here is the simplified version of the model I have: ``` class Parent(models.Model): name = models.CharField(max_length=50) class Child(models.Model): parent = models.ForeignK...
I think your problem is you forgot to add a **related\_name** for your Child model. I would have the models like this: ``` class Parent(models.Model): name = models.CharField(max_length=50) class Child(models.Model): parent = models.ForeignKey(Parent, related_name='children') # <--- Add related_name c...
Demystifying sharedctypes performance
33,853,543
11
2015-11-22T09:56:37Z
33,915,113
13
2015-11-25T11:16:39Z
[ "python" ]
In Python it is possible to share ctypes objects between multiple processes. However, I notice that allocating these objects seems to be extremely expensive. Consider the following code: ``` from multiprocessing import sharedctypes as sct import ctypes as ct import numpy as np n = 100000 l = np.random.randint(0, 10, size...
# Sample Code I rewrote your sample code a little bit to look into this issue. Here's where I landed, I'll use it in my answer below: `so.py`: ``` from multiprocessing import sharedctypes as sct import ctypes as ct import numpy as np n = 100000 l = np.random.randint(0, 10, size=n) def sct_init(): sh = sct.Raw...
Generate random number outside of range in python
33,857,855
18
2015-11-22T17:30:18Z
33,859,020
8
2015-11-22T19:11:51Z
[ "python", "random", "range" ]
I'm currently working on a pygame game and I need to place objects randomly on the screen, except they cannot be within a designated rectangle. Is there an easy way to do this rather than continuously generating a random pair of coordinates until it's outside of the rectangle? Here's a rough example of what the screen...
This offers an O(1) approach in terms of both time and memory. **Rationale** The accepted answer along with some other answers seem to hinge on the necessity to generate lists of *all* possible coordinates, or recalculate until there is an acceptable solution. Both approaches take more time and memory than necessary....
Generate random number outside of range in python
33,857,855
18
2015-11-22T17:30:18Z
33,861,819
10
2015-11-23T00:02:04Z
[ "python", "random", "range" ]
I'm currently working on a pygame game and I need to place objects randomly on the screen, except they cannot be within a designated rectangle. Is there an easy way to do this rather than continuously generating a random pair of coordinates until it's outside of the rectangle? Here's a rough example of what the screen...
1. Partition the box into a set of sub-boxes. 2. Among the valid sub-boxes, choose which one to place your point in with probability proportional to their areas 3. Pick a random point uniformly at random from within the chosen sub-box. [![random sub-box](http://i.stack.imgur.com/EIZ68.png)](http://i.stack.imgur.com/EI...
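The three steps above can be sketched as follows, assuming an axis-aligned forbidden rectangle fully inside the screen (the band layout and names are my own):

```python
import random

def sample_outside(screen_w, screen_h, rect):
    # rect = (x0, y0, x1, y1) is the forbidden region.  Partition the
    # rest of the screen into four bands (above, below, left, right),
    # pick one with probability proportional to its area, then sample
    # uniformly inside the chosen band.
    x0, y0, x1, y1 = rect
    boxes = [
        (0, 0, screen_w, y0),          # above the rectangle
        (0, y1, screen_w, screen_h),   # below the rectangle
        (0, y0, x0, y1),               # band to the left
        (x1, y0, screen_w, y1),        # band to the right
    ]
    boxes = [b for b in boxes if (b[2] - b[0]) > 0 and (b[3] - b[1]) > 0]
    weights = [(b[2] - b[0]) * (b[3] - b[1]) for b in boxes]
    bx0, by0, bx1, by1 = random.choices(boxes, weights=weights)[0]
    return random.uniform(bx0, bx1), random.uniform(by0, by1)

pt = sample_outside(800, 600, (200, 150, 600, 450))
print(pt)
```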
install psycopg2 on mac osx 10.9.5 [pg_config] [pip]
33,866,695
8
2015-11-23T08:42:29Z
33,866,865
13
2015-11-23T08:51:13Z
[ "python", "osx", "postgresql", "psycopg2" ]
I'm trying to install `psycopg2` on my macbook. I still get the same error. I found a lot of same questions on stackoverflow but no answer seems to work. I think it is outdated. I'm using: `Mac osx 10.9.5 Python 3.4.3` My error code is: > Running setup.py egg\_info for package psycopg2 Error: pg\_config > executable...
You don't seem to have Postgres installed. Check how to install PostgreSQL on your system; one way is `brew install postgresql` (if you use Homebrew, recommended), or download the Postgres app from postgresapp.com. `pg_config` comes with Postgres, and psycopg2 is trying to find it.
install psycopg2 on mac osx 10.9.5 [pg_config] [pip]
33,866,695
8
2015-11-23T08:42:29Z
35,817,509
8
2016-03-05T17:44:24Z
[ "python", "osx", "postgresql", "psycopg2" ]
I'm trying to install `psycopg2` on my macbook. I still get the same error. I found a lot of same questions on stackoverflow but no answer seems to work. I think it is outdated. I'm using: `Mac osx 10.9.5 Python 3.4.3` My error code is: > Running setup.py egg\_info for package psycopg2 Error: pg\_config > executable...
To install `psycopg2` you need to have the server installed first (I have installed [PostgresApp](http://postgresapp.com)). Run the command manually, including the path of the `pg_config` program in the `PATH` env variable; in my case: ``` export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/9.4/bin/ ``` and then run ``` pip3 ...
Merging dictionary keys if values the same
33,871,034
5
2015-11-23T12:23:50Z
33,871,138
7
2015-11-23T12:29:14Z
[ "python" ]
So this is a weird problem that I suspect is really simple to solve. I'm building a lyrics webapp for remote players in my house. It currently generates a dictionary of players with the song they're playing. Eg: ``` { 'bathroom': <Song: Blur - Song 2>, 'bedroom1': <Song: Blur - Song 2>, 'kitchen': <Song: M...
``` from itertools import groupby x = { 'bathroom': 'a', 'bedroom1': 'a', 'kitchen': 'b' } { ','.join(i[0] for i in v): k for k,v in groupby(sorted(x.iteritems(), key=lambda p: p[1]), lambda p: p[1]) } ```
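A Python 3 version of the same idea (the answer's snippet targets Python 2, where `dict.iteritems` still exists). Plain strings stand in for the `Song` objects here; for non-orderable values you would sort by a key such as `str(v)`:

```python
from itertools import groupby

def merge_keys_by_value(d):
    # Sort the (key, value) pairs by value so groupby sees equal
    # values consecutively, then join the keys of each group.
    pairs = sorted(d.items(), key=lambda p: p[1])
    return {
        ','.join(k for k, _ in group): value
        for value, group in groupby(pairs, key=lambda p: p[1])
    }

x = {'bathroom': 'a', 'bedroom1': 'a', 'kitchen': 'b'}
print(merge_keys_by_value(x))  # → {'bathroom,bedroom1': 'a', 'kitchen': 'b'}
```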
String substitution performance in python
33,872,176
4
2015-11-23T13:22:50Z
33,872,252
7
2015-11-23T13:26:19Z
[ "python", "string", "performance", "list" ]
I have a list of ~50,000 strings (titles), and a list of ~150 words to remove from these titles, if they are found. My code so far is below. The final output should be the list of 50,000 strings, with all instances of the 150 words removed. I would like to know the most efficient (performance wise) way of doing this. M...
A lot of the performance overhead of regular expressions comes from compiling them, so you should move the compilation of the regular expressions out of the loop. This should give you a considerable improvement: ``` pattern1 = re.compile('[^0-9a-zA-Z]+') pattern2 = re.compile(r'\s+') for k in range(len(...
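For the word removal itself, a single precompiled alternation also avoids looping over all 150 words for every title. A sketch (the word list here is purely illustrative):

```python
import re

titles = ['The Quick Brown Fox', 'A Lazy Dog Story']
stop_words = ['the', 'a', 'lazy']

# One compiled pattern: \b(?:the|a|lazy)\b, case-insensitive, so every
# stop word is removed in a single pass per title.
pattern = re.compile(r'\b(?:' + '|'.join(map(re.escape, stop_words)) + r')\b',
                     re.IGNORECASE)
squeeze = re.compile(r'\s+')   # collapse the leftover whitespace

cleaned = [squeeze.sub(' ', pattern.sub('', t)).strip() for t in titles]
print(cleaned)  # → ['Quick Brown Fox', 'Dog Story']
```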
List sorting in Python
33,875,422
6
2015-11-23T16:05:57Z
33,875,631
10
2015-11-23T16:16:47Z
[ "python", "list", "sorting", "python-3.x" ]
I have N lists, but for instance 3 lists: ``` a = [1,1,1,1] b = [2,2,2,2] c = [3,3,3,3] ``` And I want to get output like this: ``` f = [1,2,3] g = [1,2,3] ``` Etc. The issue is that the solution has to be independent of the number of lists and of the number of items inside each list. For example: ``` a = [1,1] b = [2] c = [3,3...
You can use [zip\_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) ``` >>> from itertools import zip_longest >>> a = [1,1] >>> b = [2] >>> c = [3,3,3] >>> f,g,h=[[e for e in li if e is not None] for li in zip_longest(a,b,c)] >>> f [1, 2, 3] >>> g [1, 3] >>> h [3] ``` If `None` is a pot...
Python modulo result differs from wolfram alpha?
33,879,279
8
2015-11-23T19:41:31Z
33,879,472
11
2015-11-23T19:52:44Z
[ "python", "cryptography", "rsa", "modulo", "wolframalpha" ]
When I run my python 3 program: ``` exp = 211 p = 199 q = 337 d = (exp ** (-1)) % ((p - 1)*(q - 1)) ``` results in 211^(-1). But when I run the [calculation in wolfram alpha](http://www.wolframalpha.com/input/?i=%28%20211%5E%28-1%29%29%20mod%20%28%28199%20-%201%29*%28337%20-%201%29%29) I get the result I was expect...
Wolfram Alpha is computing the [*modular inverse*](https://en.wikipedia.org/wiki/Modular_multiplicative_inverse). That is, it's finding the integer `x` such that ``` exp*x == 1 mod (p - 1)*(q - 1) ``` This is not the same as the modulo operator `%`. Here, Python is simply calculating the remainder when `1/exp` is div...
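On Python 3.8+ the modular inverse Wolfram Alpha computes can be obtained directly with the three-argument form of `pow`; on older versions you would implement the extended Euclidean algorithm instead:

```python
exp, p, q = 211, 199, 337
m = (p - 1) * (q - 1)

# pow with a negative exponent and a modulus computes the modular
# inverse (Python 3.8+); raises ValueError if gcd(exp, m) != 1.
d = pow(exp, -1, m)
print(d)
assert (exp * d) % m == 1
```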
Get spotify currently playing track
33,883,360
2
2015-11-24T00:26:39Z
33,923,095
7
2015-11-25T17:40:46Z
[ "python", "linux", "spotify" ]
**EDIT : Let's try to clarify all this.** I'm writing a python script, and I want it to tell me the song that Spotify is currently playing. I've tried looking for libraries that could help me but didn't find any that are still maintained and working. I've also looked through Spotify's web API, but it does not provide...
The Spotify client on Linux implements a D-Bus interface called MPRIS - Media Player Remote Interfacing Specification. <http://specifications.freedesktop.org/mpris-spec/latest/index.html> You could access the title (and other metadata) from python like this: ``` import dbus session_bus = dbus.SessionBus() spotify_bu...
Creating lists with loops in Python
33,888,298
3
2015-11-24T08:04:39Z
33,888,415
7
2015-11-24T08:12:33Z
[ "python", "list", "loops" ]
I'm trying to create a sequence of lists with different variable names that correspond to different lines of a text file. My current code requires me to hard-code the number of lines in the file: ``` with open('ProjectEuler11Data.txt') as numbers: data = numbers.readlines() for line in data: if line ==...
``` with open('ProjectEuler11Data.txt') as numbers: data = numbers.readlines() lines = [line.split() for line in data] ``` I am not sure why you need different variable names for each line when you can have a list with all lines at the end. You can now simply access the individual lines by lines[0], lines[1] and s...
Difference between io.open vs open in python
33,891,373
2
2015-11-24T10:38:48Z
33,891,608
7
2015-11-24T10:49:58Z
[ "python", "file", "python-3.x", "io", "python-2.x" ]
In the past, there's `codecs` which got replaced by `io`. Although it seems like it's more advisable to use `io.open`, most introductory python classes still teaches `open`. There's a question with [Difference between open and codecs.open in Python](http://stackoverflow.com/questions/5250744/difference-between-open-an...
Situation in Python3 according to the docs: > `io.open(file, *[options]*)` > > This is an alias for the builtin open() function. and > **While the builtin open() and the associated io module are the > recommended approach** for working with encoded text files, this module > *[i.e. codecs]* provides additional utilit...
Correct way of "Absolute Import" in Python 2.7
33,893,610
24
2015-11-24T12:21:06Z
33,949,448
13
2015-11-27T02:24:13Z
[ "python", "python-2.7", "python-import" ]
* Python 2.7.10 * In virtualenv * Enable `from __future__ import absolute_import` in each module The directory tree looks like: ``` Project/ prjt/ __init__.py pkg1/ __init__.py module1.py tests/ __init__.py test_module1.py ...
### How Python finds modules *Python finds modules via `sys.path`, and the first entry, `sys.path[0]`, is `''`, which means Python will look for modules in the current working directory* ``` import sys print sys.path ``` Python finds third-party modules in `site-packages`, so to absolute import, you can **append your pack...
Precedence of "in" in Python
33,897,137
24
2015-11-24T15:08:55Z
33,897,287
18
2015-11-24T15:15:27Z
[ "python", "syntax", "operator-precedence" ]
This is a bit of a (very basic) language-lawyer kind of question. I understand what the code does, and why, so please no elementary explanations. In an expression, `in` has [higher precedence](https://docs.python.org/3.5/reference/expressions.html?highlight=precedence#operator-precedence) than `and`. So if I write ``...
The word `in` in a `for` loop is part of a *statement*. Statements have no precedence. `in` the operator, on the other hand, is always going to be part of an expression. Precedence governs the relative priority between operators in expressions. In statements then, look for the `expression` parts in their documented g...
Precedence of "in" in Python
33,897,137
24
2015-11-24T15:08:55Z
33,897,295
28
2015-11-24T15:15:48Z
[ "python", "syntax", "operator-precedence" ]
This is a bit of a (very basic) language-lawyer kind of question. I understand what the code does, and why, so please no elementary explanations. In an expression, `in` has [higher precedence](https://docs.python.org/3.5/reference/expressions.html?highlight=precedence#operator-precedence) than `and`. So if I write ``...
In the context of a `for` statement, the `in` is just part of the grammar that makes up that compound statement, and so it is distinct from the operator `in`. The Python grammar specification defines a `for` statement [like this](https://docs.python.org/3/reference/compound_stmts.html#for): ``` for_stmt ::= "for" tar...
How to retrieve pending and executing Celery tasks with their arguments?
33,897,388
9
2015-11-24T15:20:17Z
33,963,077
7
2015-11-27T18:31:34Z
[ "python", "celery" ]
In Celery docs, there is the [example](http://docs.celeryproject.org/en/latest/userguide/workers.html#dump-of-currently-executing-tasks) of inspecting executing tasks: > You can get a list of active tasks using active(): > > ``` > >>> i.active() > [{'worker1.example.com': > [{'name': 'tasks.sleeptask', > 'id...
OK, I'm gonna drop this in as an answer. Hope this addresses your concern. Celery offers up a string for the args. To handle it, and get a list: ``` args = '(5,6,7,8)' # from celery status as_list = list(eval(args)) ``` Of course, `eval()` is a little dangerous, so you may want to use literal eval: ``` import ast ...
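The truncated `ast` variant spelled out; `literal_eval` is safe for the tuple-of-arguments strings Celery reports, since it only accepts Python literals:

```python
import ast

args = '(5, 6, 7, 8)'            # the args string from celery inspect
as_list = list(ast.literal_eval(args))
print(as_list)  # → [5, 6, 7, 8]
```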
Count number of non-NaN entries in each column of Spark dataframe with Pyspark
33,900,726
16
2015-11-24T18:03:54Z
33,901,312
45
2015-11-24T18:38:00Z
[ "python", "apache-spark", "apache-spark-sql", "pyspark" ]
I have a very large dataset that is loaded in Hive. It consists of about 1.9 million rows and 1450 columns. I need to determine the "coverage" of each of the columns, meaning, the fraction of rows that have non-NaN values for each column. Here is my code: ``` from pyspark import SparkContext from pyspark.sql import H...
Let's start with a dummy data: ``` from pyspark.sql import Row row = Row("x", "y", "z") df = sc.parallelize([ row(0, 1, 2), row(None, 3, 4), row(None, None, 5)]).toDF() ## +----+----+---+ ## | x| y| z| ## +----+----+---+ ## | 0| 1| 2| ## |null| 3| 4| ## |null|null| 5| ## +----+----+---+ ``` All yo...
Can you break a while loop from outside the loop?
33,906,813
2
2015-11-25T01:05:48Z
33,906,858
7
2015-11-25T01:11:05Z
[ "python", "python-3.x", "while-loop", "boolean-expression" ]
Can you break a while loop from outside the loop? Here's a (very simple) example of what I'm trying to do: I want to ask for continuous input inside a while loop, but when the input is 'exit', I want the while loop to break! ``` active = True def inputHandler(value): if value == 'exit': active = False while ac...
In your case, in `inputHandler`, you are creating a new variable called `active` and storing `False` in it. This will not affect the module level `active`. To fix this, you need to explicitly say that `active` is not a new variable, but the one declared at the top of the module, with the `global` keyword, like this `...
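The fix described above, in full:

```python
active = True

def inputHandler(value):
    # 'global' makes the assignment rebind the module-level name
    # instead of creating a new function-local variable.
    global active
    if value == 'exit':
        active = False

inputHandler('exit')
print(active)  # → False
```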
Flatten a bunch of key/value dictionaries into a single dictionary?
33,908,636
2
2015-11-25T04:46:39Z
33,908,661
9
2015-11-25T04:49:17Z
[ "python" ]
I want to convert this: `[{u'Key': 'color', u'Value': 'red'}, {u'Key': 'size', u'Value': 'large'}]` into this: `{'color': 'red', 'size': 'large'}`. Anyone have any recommendations? I'm been playing with list comprehensions, lambda functions, and `zip()` for over an hour and feel like I'm missing an obvious solution. T...
You can use [dictionary comprehension](https://stackoverflow.com/documentation/python/196/comprehensions/738/dictionary-comprehensions#t=201607261143021995509) and try something like this: ### Python-2.7 or Python-3.x ``` >>> a = [{u'Key': 'color', u'Value': 'red'}, {u'Key': 'size', u'Value': 'large'}] >>> b = {i['Ke...
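The dict comprehension spelled out as a runnable snippet:

```python
pairs = [{u'Key': 'color', u'Value': 'red'},
         {u'Key': 'size', u'Value': 'large'}]

# Each small dict contributes exactly one (Key, Value) entry.
flat = {d['Key']: d['Value'] for d in pairs}
print(flat)  # → {'color': 'red', 'size': 'large'}
```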
Why does a regular expression containing '\' work without being a raw string?
33,915,134
5
2015-11-25T11:17:45Z
33,915,210
8
2015-11-25T11:21:56Z
[ "python", "regex", "python-3.4" ]
Please refer to this Regular Expression HOWTO for python3 <https://docs.python.org/3/howto/regex.html#performing-matches> ``` >>> p = re.compile('\d+') >>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping') ['12', '11', '10'] ``` I have read that for regular expression containing `'\'`, the raw ...
It happens to work because `'\d'` doesn't correspond to an escape sequence the way `'\n'` or `'\t'` do. Sometimes a raw string turns out the same as the regular-string version. Generally, though, raw strings ensure that you don't get any surprises in your expression.
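A quick check of the claim (note that recent Python versions also warn about unrecognized escapes like `'\d'`, which is one more reason to prefer raw strings):

```python
import re

# '\d' is not a recognized escape, so the plain and raw forms are equal:
assert '\d' == r'\d' == '\\d'
# '\n' IS a recognized escape, so the plain and raw forms differ:
assert '\n' != r'\n'

# Either form works as a regex pattern:
print(re.findall(r'\d+', '12 pipers, 11 lords'))  # → ['12', '11']
```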
Rewrite of zip function won't work
33,918,006
4
2015-11-25T13:40:46Z
33,918,092
7
2015-11-25T13:45:00Z
[ "python" ]
I'm rewriting the zip function as a practice of my Python skills. The aim is to write it using list comprehension, although I am not 100% sure I am fully comfortable with it hence I'm doing this. Here is what I have so far: ``` def zip(l1, l2): return [(l1[0], l2[0])] + zip(l1[1:], l2[1:]) z = zip(['a', 'b', 'c'...
Your `zip` function implementation is recursive. At some point `l1[1:]` or `l2[1:]` will become empty, and attempts to access the first element will fail with `IndexError`. Check if both `l1` and `l2` are nonempty and return empty list if they are: ``` def zip(l1, l2): if not (l1 and l2): return [] re...
What type signature do generators have in Python?
33,918,325
7
2015-11-25T13:54:59Z
33,918,389
8
2015-11-25T13:57:36Z
[ "python", "python-3.x", "annotations", "type-hinting" ]
Given that the new Python 3.5 allows type hinting with type signatures I want to use the new feature, but I don't know how to fully annotate a function with the following structure: ``` def yieldMoreIfA(text:str): if text == "A": yield text yield text return else: yield text ...
There is a [`Generator[yield_type, send_type, return_type]` type](https://docs.python.org/3/library/typing.html#typing.Generator): ``` from typing import Generator def yieldMoreIfA(text: str) -> Generator[str, None, None]: if text == "A": yield text yield text return else: yiel...
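A runnable sketch of the annotation; as a side note, when a generator only yields (never receives or returns a value, as far as the truncated snippet shows), `Iterator[str]` is an equivalent, shorter annotation:

```python
from typing import Generator, Iterator

def yieldMoreIfA(text: str) -> Generator[str, None, None]:
    if text == "A":
        yield text
        yield text
        return
    else:
        yield text

# Equivalent, shorter annotation when send/return types are unused:
def yieldOnce(text: str) -> Iterator[str]:
    yield text

print(list(yieldMoreIfA("A")))  # → ['A', 'A']
print(list(yieldOnce("x")))     # → ['x']
```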
How to set adaptive learning rate for GradientDescentOptimizer?
33,919,948
18
2015-11-25T15:08:58Z
33,922,859
42
2015-11-25T17:28:00Z
[ "python", "tensorflow" ]
I am using TensorFlow to train a neural network. This is how I am initializing the `GradientDescentOptimizer`: ``` init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) mse = tf.reduce_mean(tf.square(out - out_)) train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse) ``` The thing...
First of all, `tf.train.GradientDescentOptimizer` is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the [`tf.train.AdagradOptimizer`](http://www.tensorflow.org/api_docs/python/train.html#AdagradOptimizer) and the [`tf.train....
How to set adaptive learning rate for GradientDescentOptimizer?
33,919,948
18
2015-11-25T15:08:58Z
33,931,754
27
2015-11-26T06:14:54Z
[ "python", "tensorflow" ]
I am using TensorFlow to train a neural network. This is how I am initializing the `GradientDescentOptimizer`: ``` init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) mse = tf.reduce_mean(tf.square(out - out_)) train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse) ``` The thing...
Tensorflow provides an op to automatically apply an exponential decay to a learning rate tensor: [`tf.train.exponential_decay`](http://www.tensorflow.org/api_docs/python/train.html#exponential_decay). For an example of it in use, see [this line in the MNIST convolutional model example](https://github.com/tensorflow/ten...
Why does TensorFlow return [[nan nan]] instead of probabilities from a CSV file?
33,922,937
6
2015-11-25T17:32:07Z
33,929,658
9
2015-11-26T02:28:00Z
[ "python", "csv", "tensorflow" ]
Here is the code that I am using. I'm trying to get a 1, 0, or hopefully a probability in result to a real test set. When I just split up the training set and run it on the training set I get a ~93% accuracy rate, but when I train the program and run it on the actual test set (the one without the 1's and 0's filling in...
I don't know the direct answer, but I know how I'd approach debugging it: [`tf.Print`](http://www.tensorflow.org/api_docs/python/control_flow_ops.html#Print). It's an op that prints the value as tensorflow is executing, and returns the tensor for further computation, so you can just sprinkle them inline in your model. ...
KeyError: 'data' with Python Instagram API client
33,924,581
7
2015-11-25T19:08:02Z
35,955,196
13
2016-03-12T07:58:00Z
[ "python", "instagram", "instagram-api", "keyerror" ]
I'm using this client [`python-instagram`](https://github.com/Instagram/python-instagram) with `Python 3.4.3` on `MacOS`. My steps: * Registered a new client on `instagram`, received client\_id and client\_secret * Pip install python-instagram * Copy sample\_app.py to my mac I followed the instructions on [`Sample a...
There is an open [`Github issue`](https://github.com/Instagram/python-instagram/issues/202) for this bug, a [`fix`](https://github.com/shackra/python-instagram/commit/c7af85fa867bf33a2370bc051c45db07f656e0da) was sent, but it's not merged yet. Add the one line fix to `models.py` on your installed package. Open with s...
How are these two functions the same?
33,928,553
3
2015-11-26T00:03:18Z
33,928,724
7
2015-11-26T00:23:09Z
[ "python" ]
Here's the code that Zed Shaw provides in Learning Python the Hard Way: ``` ten_things = "Apples Oranges Crows Telephone Light Sugar" print "Wait there's not 10 things in that list, let's fix that." stuff = ten_things.split(' ') more_stuff = ["Day", "Night", "Song", "Frisbee", "Corn", "Banana", "Girl", "Boy"] while...
To be precise, `''.join(things)` and `join('',things)` are not necessarily the same. However, `''.join(things)` and `str.join('',things)` *are* the same. The explanation requires some knowledge of how classes work in Python. I'll be glossing over or ignoring a lot of details that are not totally relevant to this discus...
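The equivalence the answer describes can be checked directly; `' '.join(seq)` is just the bound form of `str.join(' ', seq)`:

```python
things = ["Apples", "Oranges", "Crows"]

bound = ' '.join(things)         # method called on a str instance
unbound = str.join(' ', things)  # same method looked up on the class
```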
Fast alternative to run a numpy based function over all the rows in Pandas DataFrame
33,931,933
10
2015-11-26T06:30:34Z
33,932,329
12
2015-11-26T06:57:40Z
[ "python", "numpy", "pandas", "cython" ]
I have a Pandas data frame created the following way: ``` import pandas as pd def create(n): df = pd.DataFrame({ 'gene':["foo", "bar", "qux", "woz"], 'cell1':[433.96,735.62,483.42,10.33], ...
A faster way is to implement a vectorized version of the function, which operates on a two dimensional ndarray directly. This is very doable since many functions in numpy can operate on two dimensional ndarray, controlled using the `axis` parameter. A possible implementation: ``` def sparseness2(xs): nr = np.sqrt(...
Fast alternative to run a numpy based function over all the rows in Pandas DataFrame
33,931,933
10
2015-11-26T06:30:34Z
33,932,750
8
2015-11-26T07:24:53Z
[ "python", "numpy", "pandas", "cython" ]
I have a Pandas data frame created the following way: ``` import pandas as pd def create(n): df = pd.DataFrame({ 'gene':["foo", "bar", "qux", "woz"], 'cell1':[433.96,735.62,483.42,10.33], ...
Here's one vectorized approach using [`np.einsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html) to perform all those operations in one go across the entire dataframe. Now, this `np.einsum` is supposedly pretty efficient for such multiplication and summing purposes. In our case, we can use it t...
String indexing - Why S[0][0] works and S[1][1] fails?
33,932,508
5
2015-11-26T07:09:18Z
33,932,551
10
2015-11-26T07:11:55Z
[ "python", "python-2.7", "python-3.x" ]
Suppose I create a string: ``` >>> S = "spam" ``` Now I index it as follows: ``` >>> S[0][0][0][0][0] ``` I get output as: ``` >>> 's' ``` But when i index it as: ``` >>> S[1][1][1][1][1] ``` I get output as: ``` Traceback (most recent call last): File "<pyshell#125>", line 1, in <module> L[1][1][1][1][1] Inde...
The answer is that `S[0]` gives you a string of length 1, which thus necessarily has a character at index 0. `S[1]` also gives you a string of length 1, but it necessarily does not have a character at index 1. See below: ``` >>> S = "spam" >>> S[0] 's' >>> S[0][0] 's' >>> S[1] 'p' >>> S[1][0] 'p' >>> S[1][1] Traceback...
String indexing - Why S[0][0] works and S[1][1] fails?
33,932,508
5
2015-11-26T07:09:18Z
33,932,561
8
2015-11-26T07:12:19Z
[ "python", "python-2.7", "python-3.x" ]
Suppose I create a string: ``` >>> S = "spam" ``` Now I index it as follows: ``` >>> S[0][0][0][0][0] ``` I get output as: ``` >>> 's' ``` But when i index it as: ``` >>> S[1][1][1][1][1] ``` I get output as: ``` Traceback (most recent call last): File "<pyshell#125>", line 1, in <module> L[1][1][1][1][1] Inde...
The first index (`[0]`) of any string is its first character. Since this results in a one-character string, the first index of that string is the first character, which is itself. You can do `[0]` as much as you want and stay with the same character. The second index (`[1]`), however, only exists for a string with at ...
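The behaviour is easy to verify: each indexing step returns a one-character string, which only has index 0:

```python
S = "spam"

first = S[0]               # 's' — a length-1 string
still_first = S[0][0][0]   # indexing 's' at 0 keeps returning 's'

second = S[1]              # 'p', also length 1
try:
    second[1]              # index 1 is out of range for a length-1 string
    failed = False
except IndexError:
    failed = True
```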
What's the purpose of tf.app.flags in TensorFlow?
33,932,901
16
2015-11-26T07:34:17Z
33,938,519
22
2015-11-26T12:17:25Z
[ "python", "tensorflow" ]
I am reading some example codes in Tensorflow, I found following code ``` flags = tf.app.flags FLAGS = flags.FLAGS flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.') flags.DEFINE_integer('max_steps', 2000, 'Number of steps to run trainer.') flags.DEFINE_integer('hidden1', 128, 'Number of units in hidd...
The `tf.app.flags` module is presently a thin wrapper around python-gflags, so the [documentation for that project](https://github.com/gflags/python-gflags) is the best resource for how to use it. It is similar to [`argparse`](https://docs.python.org/2.7/library/argparse.html), which implements a subset of the functionality in [`python-...
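For code that should not depend on TensorFlow's thin flags wrapper, the standard-library `argparse` covers the same use case; a sketch mirroring the flags from the question:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--learning_rate', type=float, default=0.01,
                    help='Initial learning rate.')
parser.add_argument('--max_steps', type=int, default=2000,
                    help='Number of steps to run trainer.')

defaults = parser.parse_args([])    # no CLI args -> defaults, like FLAGS
overridden = parser.parse_args(['--max_steps', '100'])
```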
Are there really only 4 Matplotlib Line Styles?
33,936,134
4
2015-11-26T10:25:43Z
33,936,680
7
2015-11-26T10:49:51Z
[ "python", "matplotlib", "plot" ]
I've been looking for new line styles in matplotlib, and the only line styles available are ["-", "--", "-.", ":",]. (The style options ['', ' ', 'None',] don't count because they just hide the lines.) Are there really only 4 line styles in Matplotlib pyplot? Are there any extensions that add further line styles? Is t...
You can use the `dashes` kwarg to set custom dash styles. From the [docs](http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D.set_dashes): > Set the dash sequence, sequence of dashes with on off ink in points. If seq is empty or if seq = (None, None), the linestyle will be set to solid. Here's some exam...
python bokeh offset with rect plotting
33,936,852
3
2015-11-26T10:57:33Z
34,213,135
7
2015-12-10T22:46:34Z
[ "python", "matrix", "bokeh", "glyph" ]
I have a problem while plotting a matrix with python bokeh and glyphs. I'm a newbie in Bokeh and just adapted a code I found on the web. Everything seems to be ok but there is an offset when I launch the function. [offset](http://i.stack.imgur.com/obfeE.png) And the thing I'd like to have is : [no offset](http://i...
I had the same question too. The issue is probably duplicates in your `x_range` and `y_range` - I got help via the mailing list: cf. <https://groups.google.com/a/continuum.io/forum/#!msg/bokeh/rvFcJV5_WQ8/jlm13N5qCAAJ> and [Issues with Correlation graphs in Bokeh](http://stackoverflow.com/questions/24179776/issues-with-...
How to specify multiple return types using type-hints
33,945,261
11
2015-11-26T18:45:00Z
33,945,518
18
2015-11-26T19:05:37Z
[ "python", "python-3.x", "return-type", "type-hinting", "python-3.5" ]
I have a function in python that can either return a `bool` or a `list`. Is there a way to specify the return types using type hints. For example, Is this the correct way to do it? ``` def foo(id) -> list or bool: ... ```
From the [documentation](https://docs.python.org/3/library/typing.html#typing.Union) > class `typing.Union` > > Union type; ***Union[X, Y] means either X or Y.*** Hence the proper way to represent more than one return data type is ``` def foo(client_id: str) -> Union[list,bool] ``` --- But do note that typing is n...
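A runnable sketch of the `Union` return annotation; the function body here is invented purely for illustration:

```python
from typing import Union

def foo(client_id: str) -> Union[list, bool]:
    # Hypothetical lookup: return a list on a hit, False otherwise.
    known = {'abc': [1, 2, 3]}
    return known.get(client_id, False)

hit = foo('abc')
miss = foo('xyz')
```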
Reduce function doesn't handle an empty list
33,945,882
3
2015-11-26T19:34:13Z
33,945,933
8
2015-11-26T19:39:00Z
[ "python", "list", "lambda", "reduce" ]
I previously created a recursive function to find the product of a list. Now I've created the same function, but using the `reduce` function and `lambda`. When I run this code, I get the correct answer. ``` items = [1, 2, 3, 4, 10] print(reduce(lambda x, y: x*y, items)) ``` However, when I give an empty list, an err...
As it is written in the [documentation](https://docs.python.org/2/library/functions.html#reduce): > If the optional initializer is present, it is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty. If initializer is not given and iterable contains only one it...
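Passing the optional initializer makes `reduce` total over empty input:

```python
from functools import reduce  # builtin in Python 2, functools in Python 3

items = [1, 2, 3, 4, 10]
product = reduce(lambda x, y: x * y, items, 1)
empty_product = reduce(lambda x, y: x * y, [], 1)  # no error: returns 1
```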
Get most significant digit in python
33,947,632
3
2015-11-26T22:06:18Z
33,947,673
12
2015-11-26T22:10:20Z
[ "python", "list", "integer", "significant-digits" ]
Say I have list `[34523, 55, 65, 2]` What is the most efficient way to get `[3,5,6,2]` which are the most significant digits. If possible without changing each to `str()`?
Assuming you're only dealing with positive numbers, you can divide each number by the largest power of 10 smaller than the number, and then take the floor of the result. ``` >>> from math import log10, floor >>> lst = [34523, 55, 65, 2] >>> [floor(x / (10**floor(log10(x)))) for x in lst] [3, 5, 6, 2] ``` If you're us...
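The logarithm approach from the answer as a self-contained sketch:

```python
from math import log10, floor

lst = [34523, 55, 65, 2]
# Divide each number by the largest power of 10 not exceeding it,
# then floor the result to get the leading digit.
msd = [floor(x / 10 ** floor(log10(x))) for x in lst]
```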
Random Forest is overfitting
33,948,946
4
2015-11-27T00:55:00Z
33,949,738
7
2015-11-27T03:10:33Z
[ "python", "machine-learning", "scikit-learn", "random-forest" ]
I'm using scikit-learn with a stratified CV to compare some classifiers. I'm computing: accuracy, recall, auc. I used for the parameter optimization GridSearchCV with a 5 CV. ``` RandomForestClassifier(warm_start= True, min_samples_leaf= 1, n_estimators= 800, min_samples_split= 5,max_features= 'log2', max_depth= 400,...
Herbert, if your aim is to compare different learning algorithms, I recommend you to use nested cross-validation. (I refer to learning algorithm as different algorithms such as logistic regression, decision trees, and other discriminative models that learn the hypothesis or model -- the final classifier -- from your t...
How could I use Batch Normalization in TensorFlow?
33,949,786
40
2015-11-27T03:17:52Z
33,950,177
35
2015-11-27T04:16:11Z
[ "python", "tensorflow" ]
I would like to use Batch Normalization in TensorFlow, since I found it in the source code [`core/ops/nn_ops.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc). However, I did not find it documented on tensorflow.org. BN has different semantics in MLP and CNN, so I am not sure wha...
**Update July 2016** The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either [contrib/layers](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/layers.py), [tflearn](http://tflearn.org/layers/normalization/), or [slim]...
How could I use Batch Normalization in TensorFlow?
33,949,786
40
2015-11-27T03:17:52Z
34,634,291
23
2016-01-06T13:26:41Z
[ "python", "tensorflow" ]
I would like to use Batch Normalization in TensorFlow, since I found it in the source code [`core/ops/nn_ops.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc). However, I did not find it documented on tensorflow.org. BN has different semantics in MLP and CNN, so I am not sure wha...
The following works fine for me, it does not require invoking EMA-apply outside. ``` import numpy as np import tensorflow as tf from tensorflow.python import control_flow_ops def batch_norm(x, n_out, phase_train, scope='bn'): """ Batch normalization on convolutional maps. Args: x: Tensor...
Comparison to `None` will result in an elementwise object
33,954,216
5
2015-11-27T09:29:37Z
33,954,311
7
2015-11-27T09:34:17Z
[ "python", "numpy" ]
Apparantly it will (in the 'future') not be possible anymore to use the following: ``` import numpy as np np.array([0,1,2]) == None > False > FutureWarning: comparison to `None` will result in an elementwise object comparison in the future. ``` This also breaks the lazy loading pattern for numpy arrays: ``` import n...
You are looking for `is`: ``` if a is None: a = something else ``` The problem is that, by using the `==` operator, if the input element `a` is a numpy array, numpy will try to perform an element wise comparison and tell you that you cannot compare it. For `a` a numpy array, `a == None` gives error, `np.all(a ==...
Progress bar while uploading a file to dropbox
33,958,600
9
2015-11-27T13:36:11Z
33,985,193
10
2015-11-29T16:40:15Z
[ "python", "python-2.7", "progress-bar", "dropbox", "dropbox-api" ]
``` import dropbox client = dropbox.client.DropboxClient('<token>') f = open('/ssd-scratch/abhishekb/try/1.mat', 'rb') response = client.put_file('/data/1.mat', f) ``` I want to upload a big file to dropbox. How can I check the progress? [[Docs]](https://www.dropbox.com/developers-v1/core/docs/python#ChunkedUploader) ...
`upload_chunked`, as [the documentation](https://www.dropbox.com/developers-v1/core/docs/python#ChunkedUploader.upload_chunked) notes: > Uploads data from this `ChunkedUploader`'s `file_obj` in chunks, until an > error occurs. Throws an exception when an error occurs, and can be > called again to resume the upload. S...
Can Pickle handle files larger than the RAM installed on my machine?
33,965,021
15
2015-11-27T21:31:26Z
33,965,199
9
2015-11-27T21:50:55Z
[ "python", "python-3.x", "pickle", "textblob" ]
I'm using pickle for saving on disk my NLP classifier built with the TextBlob library. I'm using pickle after a lot of searches related to [this question](http://stackoverflow.com/questions/33883976/python-textblob-and-text-classification?). At the moment I'm working locally and I have no problem loading the pickle fi...
Unfortunately this is difficult to accurately answer without testing it on your machine. Here are some initial thoughts: 1. There is no inherent size limit that the Pickle module enforces, but you're pushing the boundaries of its intended use. It's not designed for individual large objects. However, since you're ...
Tensorflow error using my own data
33,974,231
5
2015-11-28T17:27:16Z
33,974,372
9
2015-11-28T17:40:05Z
[ "python", "python-2.7", "tensorflow" ]
I've been playing with the Tensorflow library doing the tutorials. Now I wanted to play with my own data, but I fail horribly. This is perhaps a noob question but I can't figure it out. I'm using this example: <https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3%20-%20Neural%20Networks/convolut...
This error arises because the shape of the data that you're trying to feed (104 x 96 x 96 x 1) does not match the shape of the input placeholder (`batch_size` x 9216, where `batch_size` may be variable). To make it work, add the following line before running a training step: ``` batch_xs = np.reshape(batch_xs, (-1, 9...
Generate a random sample of points distributed on the surface of a unit sphere
33,976,911
4
2015-11-28T22:01:36Z
33,977,530
9
2015-11-28T23:11:15Z
[ "python", "numpy", "geometry", "random-sample", "uniform-distribution" ]
I am trying to generate random points on the surface of the sphere using numpy. I have reviewed the post that explains uniform distribution [here](http://stackoverflow.com/questions/5408276/python-uniform-spherical-distribution). However, need ideas on how to generate the points only on the surface of the sphere. I hav...
Based on [this approach](http://mathworld.wolfram.com/SpherePointPicking.html), you can simply generate a vector consisting of independent samples from three standard normal distributions, then normalize the vector such that its magnitude is 1: ``` import numpy as np def sample_spherical(npoints, ndim=3): vec = n...
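The same normal-then-normalize trick, shown here for a single point with only the standard library (the answer's numpy version generates whole batches at once):

```python
import math
import random

def sample_spherical(ndim=3):
    # Independent standard normals are rotation-invariant, so the
    # normalized vector is uniform on the unit sphere's surface.
    vec = [random.gauss(0.0, 1.0) for _ in range(ndim)]
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

point = sample_spherical()
radius = math.sqrt(sum(v * v for v in point))
```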
What's the difference between loop.create_task, asyncio.async/ensure_future and Task?
33,980,086
18
2015-11-29T06:30:45Z
33,980,293
12
2015-11-29T07:02:18Z
[ "python", "python-3.x", "coroutine", "python-asyncio" ]
I'm a little bit confused by some `asyncio` functions. I see there is [`BaseEventLoop.create_task(coro)`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.create_task) function to schedule a co-routine. The documentation for `create_task` says its a new function and for compatibility we sh...
As you've noticed, they all do the same thing. `asyncio.async` had to be replaced with `asyncio.ensure_future` because in Python >= 3.5, `async` has been made a keyword[[1]](https://www.python.org/dev/peps/pep-0492/#backwards-compatibility). `create_task`'s raison d'etre[[2]](https://docs.python.org/3/library/asyncio...
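All of these spellings wrap a coroutine into a scheduled `Task`; a quick demonstration (Python >= 3.7):

```python
import asyncio

async def compute():
    return 42

async def main():
    loop = asyncio.get_running_loop()
    t1 = asyncio.ensure_future(compute())  # module-level helper
    t2 = loop.create_task(compute())       # equivalent loop method
    return await t1, await t2

result = asyncio.run(main())
```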
Theano config directly in script
33,988,334
4
2015-11-29T21:25:01Z
33,992,733
7
2015-11-30T06:34:42Z
[ "python", "theano" ]
I'm new to Theano and I wonder how to configure the default setting directly from script (without setting envir. variables). E.g. this is a working solution ([source](http://deeplearning.net/software/theano/tutorial/using_gpu.html)): ``` $ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py ``` I in...
When you do this: ``` $ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py ``` All you're actually doing is setting an environment variable before running the Python script. You can set environment variables in Python too. For example, the `THEANO_FLAGS` environment variable can be set inside Pyth...
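Setting the variable from inside Python works the same as prefixing the shell command, as long as it happens before Theano is imported (a sketch; the flags string is taken from the question):

```python
import os

# Must run before `import theano` anywhere in the process,
# because Theano reads THEANO_FLAGS at import time.
os.environ['THEANO_FLAGS'] = 'mode=FAST_RUN,device=gpu,floatX=float32'
# import theano  # would now pick up the flags above

flags = os.environ['THEANO_FLAGS']
```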
Is there a filter() opposite builtin?
33,989,155
2
2015-11-29T22:56:49Z
33,989,179
8
2015-11-29T22:59:48Z
[ "python", "functional-programming" ]
Is there a function in Python that does the opposite of `filter`? I.e. keeps the items in the iterable that the callback returns `False` for? Couldn't find anything.
No, there is no built-in inverse function for `filter()`, because you could simply *invert the test*. Just add `not`: ``` positive = filter(lambda v: some_test(v), values) negative = filter(lambda v: not some_test(v), values) ``` The `itertools` module does have [`itertools.ifilterfalse()`](https://docs.python.org/2/...
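Python 3 spells it `itertools.filterfalse` (Python 2: `itertools.ifilterfalse`):

```python
from itertools import filterfalse

values = [1, 2, 3, 4, 5, 6]
kept = list(filter(lambda v: v % 2 == 0, values))          # test is True
dropped = list(filterfalse(lambda v: v % 2 == 0, values))  # test is False
```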
Python: gensim: RuntimeError: you must first build vocabulary before training the model
33,989,826
5
2015-11-30T00:30:37Z
33,991,111
13
2015-11-30T03:43:23Z
[ "python", "gensim", "word2vec" ]
I know that this question has been asked already, but I was still not able to find a solution for it. I would like to use gensim's `word2vec` on a custom data set, but now I'm still figuring out in what format the dataset has to be. I had a look at [this post](http://streamhacker.com/2014/12/29/word2vec-nltk/) where t...
Default `min_count` in gensim's Word2Vec is set to 5. If there is no word in your vocab with frequency greater than 4, your vocab will be empty and hence the error. Try ``` voc_vec = word2vec.Word2Vec(vocab, min_count=1) ```
AttributeError: Unknown property color_cycle
33,995,707
4
2015-11-30T09:53:54Z
34,629,029
8
2016-01-06T08:55:37Z
[ "python", "pandas", "matplotlib" ]
I am learning 'pandas' and trying to plot `id` column but I get an error `AttributeError: Unknown property color_cycle` and empty graph. The graph only appears in interactive shell. When I execute as script I get same error except the graph doesn't appear. Below is the log: ``` >>> import pandas as pd >>> pd.set_opti...
There's currently a bug in Pandas 0.17.1 with Matplotlib 1.5.0 ``` print pandas.__version__ print matplotlib.__version__ ``` Instead of using ``` import pandas as pd pd.set_option('display.mpl_style', 'default') ``` Use: ``` import matplotlib matplotlib.style.use('ggplot') ```
Hash for lambda function in Python
33,998,594
21
2015-11-30T12:22:33Z
33,998,809
35
2015-11-30T12:34:23Z
[ "python", "python-2.7", "hash", "lambda" ]
I'm trying to get the hash of a lambda function. Why do I get two values (8746164008739 and -9223363290690767077)? Why is the hash from the lambda function not always one value? ``` >>> fn = lambda: 1 >>> hash(fn) -9223363290690767077 >>> fn = lambda: 1 >>> hash(fn) 8746164008739 >>> fn = lambda: 1 >>> hash(fn) -92233...
Two objects are not guaranteed to hash to the same value unless they compare equal [1]. Python functions (including lambdas) don't compare equal even if they have identical code [2]. For example: ``` >>> (lambda: 1) == (lambda: 1) False ``` Implementation-wise, this behaviour is due to the fact that function objects...
Hash for lambda function in Python
33,998,594
21
2015-11-30T12:22:33Z
33,998,928
10
2015-11-30T12:41:08Z
[ "python", "python-2.7", "hash", "lambda" ]
I'm trying to get the hash of a lambda function. Why do I get two values (8746164008739 and -9223363290690767077)? Why is the hash from the lambda function not always one value? ``` >>> fn = lambda: 1 >>> hash(fn) -9223363290690767077 >>> fn = lambda: 1 >>> hash(fn) 8746164008739 >>> fn = lambda: 1 >>> hash(fn) -92233...
The hash of a `lambda` function object is based on its memory address (in CPython this is what the `id` function returns). This means that any two function objects will have different hashes (assuming there are no hash collisions), even if the functions contain the same code. To explain what's happening in the questio...
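The identity-based behaviour is easy to observe: two identical-looking lambdas share bytecode but are distinct objects:

```python
f = lambda: 1
g = lambda: 1

same_code = f.__code__.co_code == g.__code__.co_code  # identical bytecode
equal_objects = (f == g)           # False: function equality is identity
stable_hash = hash(f) == hash(f)   # hashing the same object is stable
```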
How to fix FailedPreconditionError: Tensor Flow Tutorial Using numpy/pandas data
34,001,922
4
2015-11-30T14:41:20Z
34,013,098
10
2015-12-01T05:15:19Z
[ "python", "pandas", "classification", "tensorflow" ]
I am working through the [tensor flow tutorial](https://www.tensorflow.org/tutorials/mnist/pros/index.html), but am trying to use a numpy or pandas format for the data, so that I can compare it with Scikit-Learn results. I get the digit recognition data from kaggle - [here](https://www.kaggle.com/c/digit-recognizer/da...
The `FailedPreconditionError` arises because the program is attempting to read a variable (named `"Variable_1"`) before it has been initialized. In TensorFlow, all variables must be explicitly initialized, by running their "initializer" operations. For convenience, you can run all of the variable initializers in the cu...
Tensorflow slicing based on variable
34,002,591
4
2015-11-30T15:53:28Z
34,002,962
8
2015-11-30T16:11:55Z
[ "python", "tensorflow" ]
I've found that indexing still is an open issue in tensorflow [(#206)](https://github.com/tensorflow/tensorflow/issues/206), so I'm wondering what I could use as a workaround at the moment. I want to index/slice a row/column of a matrix based on a variable that changes for every training example. What I've tried so fa...
Slicing based on a placeholder should work just fine. It looks like you are running into a type error, due to some subtle issues of shapes and types. Where you have the following: ``` x = tf.placeholder("float") i = tf.placeholder("int32") y = tf.slice(x,[i],[1]) ``` ...you should instead have: ``` x = tf.placeholde...
GRU implementation in Theano
34,004,474
3
2015-11-30T17:30:23Z
34,011,303
7
2015-12-01T01:53:34Z
[ "python", "neural-network", "theano", "deep-learning", "gated-recurrent-unit" ]
Based on the LSTM code provided in the official Theano tutorial (<http://deeplearning.net/tutorial/code/lstm.py>), I changed the LSTM layer code (i.e. the functions `lstm_layer()` and `param_init_lstm()`) to perform a GRU instead. The provided LSTM code trains well, but not the GRU I coded: the accuracy on the trainin...
The issue comes from the last line, `return rval[0]`: it should instead be `return rval`. The LSTM code provided in the official Theano tutorial (<http://deeplearning.net/tutorial/code/lstm.py>) uses `return rval[0]` because `outputs_info` contains 2 elements: ``` rval, updates = theano.scan(_step, ...
Is this time complexity actually O(n^2)?
34,008,010
64
2015-11-30T21:06:28Z
34,008,199
58
2015-11-30T21:20:14Z
[ "python", "string", "algorithm", "string-concatenation" ]
I am working on a problem out of CTCI. The third problem of chapter 1 has you take a string such as `'Mr John Smith '` and asks you to replace the intermediary spaces with `%20`: `'Mr%20John%20Smith'` The author offers this solution in Python, calling it O(n): ``` def urlify(string, length): '''function repla...
In CPython, the standard implementation of Python, there's an implementation detail that makes this usually O(n), implemented in [the code the bytecode evaluation loop calls for `+` or `+=` with two string operands](https://hg.python.org/cpython/file/2.7/Python/ceval.c#l5109). If Python detects that the left argument h...
Is this time complexity actually O(n^2)?
34,008,010
64
2015-11-30T21:06:28Z
34,008,289
19
2015-11-30T21:26:11Z
[ "python", "string", "algorithm", "string-concatenation" ]
I am working on a problem out of CTCI. The third problem of chapter 1 has you take a string such as `'Mr John Smith '` and asks you to replace the intermediary spaces with `%20`: `'Mr%20John%20Smith'` The author offers this solution in Python, calling it O(n): ``` def urlify(string, length): '''function repla...
I found this snippet of text on [Python Speed > Use the best algorithms and fastest tools](https://wiki.python.org/moin/PythonSpeed): > String concatenation is best done with `''.join(seq)` which is an `O(n)` process. In contrast, using the `'+'` or `'+='` operators can result in an `O(n^2)` process because new string...
Is this time complexity actually O(n^2)?
34,008,010
64
2015-11-30T21:06:28Z
34,008,322
26
2015-11-30T21:28:37Z
[ "python", "string", "algorithm", "string-concatenation" ]
I am working on a problem out of CTCI. The third problem of chapter 1 has you take a string such as `'Mr John Smith '` and asks you to replace the intermediary spaces with `%20`: `'Mr%20John%20Smith'` The author offers this solution in Python, calling it O(n): ``` def urlify(string, length): '''function repla...
The author relies on an optimization that happens to be here, but is not explicitly dependable. `strA = strB + strC` is typically `O(n)`, making the function `O(n^2)`. However, it is pretty easy to make sure the whole process is `O(n)`: use an array: ``` output = [] # ... loop thing output.append('%20') ...
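The append-then-join pattern from the answer, applied to the CTCI example:

```python
string = 'Mr John Smith    '
length = 13  # true length of the text, excluding trailing padding

out = []
for ch in string[:length]:
    # list.append is amortized O(1); join runs once at the end in O(n)
    out.append('%20' if ch == ' ' else ch)
urlified = ''.join(out)
```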
Is "x < y < z" faster than "x < y and y < z"?
34,014,906
127
2015-12-01T07:31:35Z
34,015,448
108
2015-12-01T08:06:24Z
[ "python", "performance" ]
From [this page](https://wiki.python.org/moin/PythonSpeed#Take_advantage_of_interpreter_optimizations), we know that: > Chained comparisons are faster than using the `and` operator. > Write `x < y < z` instead of `x < y and y < z`. However, I got a different result testing the following code snippets: ``` $ python -...
The difference is that in `x < y < z` `y` is only evaluated once. This does not make a large difference if y is a variable, but it does when it is a function call, which takes some time to compute. ``` from time import sleep def y(): sleep(.2) return 1.3 %timeit 1.2 < y() < 1.8 10 loops, best of 3: 203 ms per ...
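The single-evaluation difference can be made observable by counting calls instead of sleeping:

```python
calls = []

def y():
    calls.append(1)
    return 1.5

chained = 1.2 < y() < 1.8        # y() evaluated once
chained_calls = len(calls)

calls.clear()
anded = 1.2 < y() and y() < 1.8  # y() evaluated twice
anded_calls = len(calls)
```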
Is "x < y < z" faster than "x < y and y < z"?
34,014,906
127
2015-12-01T07:31:35Z
34,015,812
8
2015-12-01T08:28:02Z
[ "python", "performance" ]
From [this page](https://wiki.python.org/moin/PythonSpeed#Take_advantage_of_interpreter_optimizations), we know that: > Chained comparisons are faster than using the `and` operator. > Write `x < y < z` instead of `x < y and y < z`. However, I got a different result testing the following code snippets: ``` $ python -...
Since the difference in the output seems to be due to lack of optimization, I think you should ignore that difference for most cases - it could be that the difference will go away. The difference is because `y` should only be evaluated once, and that is solved by duplicating it on the stack, which requires an extra `POP_TO...
Is "x < y < z" faster than "x < y and y < z"?
34,014,906
127
2015-12-01T07:31:35Z
34,023,747
21
2015-12-01T15:17:28Z
[ "python", "performance" ]
From [this page](https://wiki.python.org/moin/PythonSpeed#Take_advantage_of_interpreter_optimizations), we know that: > Chained comparisons are faster than using the `and` operator. > Write `x < y < z` instead of `x < y and y < z`. However, I got a different result testing the following code snippets: ``` $ python -...
*Optimal* bytecode for both of the functions you defined would be ``` 0 LOAD_CONST 0 (None) 3 RETURN_VALUE ``` because the result of the comparison is not used. Let's make the situation more interesting by returning the result of the comparison. Let's also have the result not be know...
Python reversing an UTF-8 string
34,015,615
4
2015-12-01T08:17:31Z
34,015,656
9
2015-12-01T08:19:49Z
[ "python", "string", "utf-8", "character", "reverse" ]
I'm currently learning Python and as a Slovenian I often use UTF-8 characters to test my programs. Normally everything works fine, but there is one catch that I can't overtake. Even though I've got encoding declared on the top of the file it fails when I try to reverse a string containing special characters ``` #-*- c...
Python 2 strings are *byte strings*, and UTF-8 encoded text uses multiple bytes per character. Just because your terminal manages to interpret the UTF-8 bytes as characters, doesn't mean that Python knows about what bytes form one UTF-8 character. Your bytestring consists of 6 bytes, every two bytes form one character...
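On Python 3 (or after decoding in Python 2), reversing operates on characters; reversing the raw UTF-8 bytes instead splits multi-byte characters apart:

```python
word = 'žep'                    # Slovenian; 'ž' is 2 bytes in UTF-8
raw = word.encode('utf-8')      # 4 bytes total

char_reversed = word[::-1]      # correct: reverses code points
byte_reversed = raw[::-1]       # continuation bytes now lead: invalid UTF-8
try:
    byte_reversed.decode('utf-8')
    decodes = True
except UnicodeDecodeError:
    decodes = False
```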
SKlearn import MLPClassifier fails
34,016,238
24
2015-12-01T08:54:19Z
34,020,223
29
2015-12-01T12:14:39Z
[ "python", "scikit-learn", "neural-network" ]
I am trying to use the [multilayer perceptron](http://scikit-learn.org/dev/modules/neural_networks_supervised.html#classification) from scikit-learn in python. My problem is, that the import is not working. All other modules from scikit-learn are working fine. ``` from sklearn.neural_network import MLPClassifier ``` ...
`MLPClassifier` is not yet available in `scikit-learn` v0.17 (as of 1 Dec 2015). If you really want to use it you could clone `0.18dev` (however, I don't know how stable this branch currently is).
How to reverse a string using recursion?
34,023,003
4
2015-12-01T14:39:16Z
34,023,052
8
2015-12-01T14:41:31Z
[ "python", "string", "python-2.7", "recursion" ]
I'm trying out a simple program which would allow me to print out the reverse word of "computer". When I run my code, I received a runtime error *RuntimeError: maximum recursion depth exceeded in cmp*. May I know what happened and how can I solve it? ``` def reverse(str1): if str1 == '': return str1 ...
The problem is here, ``` return reverse(str1[1:] + str1[0]) ``` You are concatenating the rest of the string with the first character and passing to the `reverse` function. So, the length of the string never reduces. It should have been ``` return reverse(str1[1:]) + str1[0] ``` Now, you are passing only the rest ...
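Putting the corrected return line back into the full function from the question:

```python
def reverse(str1):
    # Base case: an empty string reverses to itself
    if str1 == '':
        return str1
    # Recurse on the tail, then append the head character at the end,
    # so the string shrinks by one character on every call.
    return reverse(str1[1:]) + str1[0]

print(reverse("computer"))  # retupmoc
```

With the parenthesis in the right place, each recursive call receives a strictly shorter string, so the recursion terminates.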
Cumulative counts in NumPy without iteration
34,027,288
6
2015-12-01T18:16:08Z
34,027,666
8
2015-12-01T18:38:48Z
[ "python", "numpy" ]
I have an array like so: ``` a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0, 0.0, 1.0]) ``` I'd like to have a running counter of **instances of 1.0** that **resets when it encounters a 0.0**, so the result would be: ``` [0, 0, 1, 2, 3, 3, 3, 4, 0, 1] ``` My initial thought was to use something like b = np.cu...
I think you could do something like ``` def rcount(a): without_reset = (a == 1).cumsum() reset_at = (a == 0) overcount = np.maximum.accumulate(without_reset * reset_at) result = without_reset - overcount return result ``` which gives me ``` >>> a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0...
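A runnable check of this approach on the question's example array (this just completes the snippet above into a self-contained script):

```python
import numpy as np

def rcount(a):
    # Cumulative count of 1.0s, ignoring resets for now
    without_reset = (a == 1).cumsum()
    # Positions where the counter must restart
    reset_at = (a == 0)
    # Count accumulated before the most recent reset, carried forward
    overcount = np.maximum.accumulate(without_reset * reset_at)
    return without_reset - overcount

a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0, 0.0, 1.0])
print(rcount(a))  # [0 0 1 2 3 3 3 4 0 1]
```

Everything is vectorized, so the whole computation runs in a handful of passes over the array with no Python-level loop.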
How does the min/max function on a nested list work?
34,050,113
19
2015-12-02T18:17:45Z
34,115,735
30
2015-12-06T09:04:15Z
[ "python", "list", "python-2.7", "python-2.x", "nested-lists" ]
Lets say, there is a nested list, like: ``` my_list = [[1, 2, 21], [1, 3], [1, 2]] ``` When the function `min()` is called on this: ``` min(my_list) ``` The output received is ``` [1, 2] ``` Why and How does it work? What are some use cases of it?
## ***How are lists and other sequences compared in Python?*** Lists (and other sequences) in Python are compared [lexicographically](https://docs.python.org/2/tutorial/datastructures.html#comparing-sequences-and-other-types) and not based on any other parameter. > Sequence objects may be compared to other objects wi...
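A quick illustration of lexicographic comparison on the question's list:

```python
my_list = [[1, 2, 21], [1, 3], [1, 2]]

# Lists are compared element by element, left to right:
#   [1, 2] < [1, 2, 21]  -- equal prefix, and the shorter list is smaller
#   [1, 2] < [1, 3]      -- first difference decides: 2 < 3
print(min(my_list))  # [1, 2]
print(max(my_list))  # [1, 3]
```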
Number list with no repeats and ordered
34,058,251
14
2015-12-03T05:05:02Z
34,058,322
8
2015-12-03T05:11:22Z
[ "python" ]
This code returns a list [0,0,0] to [9,9,9], which produces no repeats and each element is in order from smallest to largest. ``` def number_list(): b=[] for position1 in range(10): for position2 in range(10): for position3 in range(10): if position1<=position2 and position2...
Simply use list comprehension, one way to do it: ``` >>> [[x,y,z] for x in range(10) for y in range(10) for z in range(10) if x<=y and y<=z] [[0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4], [0, 0, 5], [0, 0, 6], [0, 0, 7], [0, 0, 8], [0, 0, 9], [0, 1, 1], [0, 1, 2], [0, 1, 3], [0, 1, 4], [0, 1, 5], [0, 1,...
Number list with no repeats and ordered
34,058,251
14
2015-12-03T05:05:02Z
34,058,339
11
2015-12-03T05:12:53Z
[ "python" ]
This code returns a list [0,0,0] to [9,9,9], which produces no repeats and each element is in order from smallest to largest. ``` def number_list(): b=[] for position1 in range(10): for position2 in range(10): for position3 in range(10): if position1<=position2 and position2...
On the same note as the other [`itertools`](https://docs.python.org/3/library/itertools.html) answer, there is another way with [`combinations_with_replacement`](https://docs.python.org/3/library/itertools.html#itertools.combinations_with_replacement): ``` list(itertools.combinations_with_replacement(range(10), 3)) ``...
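For the question's digits 0-9 taken three at a time, this one-liner expands to:

```python
from itertools import combinations_with_replacement

# Every non-decreasing triple of digits, in sorted order
combos = list(combinations_with_replacement(range(10), 3))

print(combos[:3])   # [(0, 0, 0), (0, 0, 1), (0, 0, 2)]
print(len(combos))  # 220, i.e. C(10 + 3 - 1, 3)

# The elements are tuples; convert if the exact list-of-lists shape matters:
as_lists = [list(t) for t in combos]
```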
how to set different PYTHONPATH variables for python3 and python2 respectively
34,066,261
11
2015-12-03T12:35:09Z
34,066,989
7
2015-12-03T13:12:47Z
[ "python", "pythonpath" ]
I want to add a specific library path only to python2. After adding `export PYTHONPATH="/path/to/lib/"` to my `.bashrc`, however, executing python3 gets the error: Your PYTHONPATH points to a site-packages dir for Python 2.x but you are running Python 3.x! I think it is due to that python2 and python3 share the common...
`PYTHONPATH` is somewhat of a hack as far as package management is concerned. A "pretty" solution would be to *package* your library and *install* it. This may sound trickier than it is, so let me show you how it works. Let us assume your "package" has a single file named `wow.py` and you keep it in `/home/user/...
Difference between writing something on one line and on several lines
34,069,542
5
2015-12-03T15:16:27Z
34,069,705
13
2015-12-03T15:23:02Z
[ "python" ]
What is the difference between writing something on one line, separated by a `,`, and on two lines? Apparently I do not understand the difference, because I thought the two functions below should return the same. ``` def fibi(n): a, b = 0, 1 for i in range(n): a, b = b, a + b return a print(fibi(6)) >...
This is because of Python's tuple unpacking. In the first one, Python collects the values on the right, makes them a tuple, then assigns the values of the tuple individually to the names on the left. So, if a == 1 and b == 2: ``` a, b = b, a + b => a, b = (2, 3) => a = 2, b = 3 ``` But in the second example, it's ...
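The two evaluation orders side by side (small values chosen for illustration):

```python
a, b = 1, 2

# Simultaneous: the right-hand side is evaluated first, as a tuple,
# before any name on the left is rebound.
a, b = b, a + b
print(a, b)  # 2 3

# Sequential: the second statement sees the *new* value of a.
a, b = 1, 2
a = b       # a is now 2
b = a + b   # uses the updated a, so b = 2 + 2
print(a, b)  # 2 4
```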
Install Plotly in Anaconda
34,072,117
5
2015-12-03T17:11:05Z
34,073,946
8
2015-12-03T18:54:09Z
[ "python", "plot", "anaconda", "conda" ]
**How to install Plotly in Anaconda?** The <https://conda.anaconda.org/plotly> says to `conda install -c https://conda.anaconda.org/plotly <package>`, and the <https://plot.ly/python/user-guide/> says to `pip install plotly`. I.e., without a package name. So **which package should I specify with conda?** I tried w...
If you don't care which version of Plotly you install, just use `pip`. `pip install plotly` is an easy way to install the latest stable package for Plotly from PyPi. `pip` is a useful package and dependency management tool, which makes these things easy, but it should be noted that Anaconda's `conda` tool will do the...
Tracing code execution in embedded Python interpreter
34,075,757
8
2015-12-03T20:42:28Z
34,076,235
8
2015-12-03T21:12:45Z
[ "python", "c", "python-3.x", "python-c-api" ]
I'd like to create an application with embedded python interpreter and basic debugging capabilities. Now I'm searching the API for functions which I could use to **run code step-by-step and get the number of the current line of code** which is being (or is about to be) executed. Official Python docs seem a little unde...
> ``` > hello.c:5:26: error: unknown type name ‘PyFrameObject’ > ``` This error means that `PyFrameObject` has not been declared. I did a [Google search](https://www.google.com/search?q=pyframeobject&ie=utf-8&oe=utf-8) which showed me [frameobject.h](http://svn.python.org/projects/python/trunk/Include/frameobject....
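The answer above concerns the C API; as a point of comparison, the same line-event mechanism is exposed to pure Python through `sys.settrace`. This is a sketch of the concept only, not of the embedding API:

```python
import sys

executed_lines = []

def tracer(frame, event, arg):
    # 'line' events fire just before each new source line executes
    if event == "line":
        executed_lines.append(frame.f_lineno)
    # Returning the tracer enables local (per-frame) tracing
    return tracer

def demo():
    x = 1
    y = 2
    return x + y

sys.settrace(tracer)
demo()
sys.settrace(None)

print(executed_lines)  # the line numbers inside demo, in execution order
```

The C-level equivalent, `PyEval_SetTrace`, works on the same events; `frame->f_lineno` (via `PyFrame_GetLineNumber`) gives the current line.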
How to change dataframe column names in pyspark?
34,077,353
11
2015-12-03T22:21:55Z
34,077,809
37
2015-12-03T22:54:58Z
[ "python", "apache-spark", "pyspark", "pyspark-sql" ]
I come from pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful using the simple command: ``` df.columns = new_column_name_list ``` However, the same doesn't work in pyspark dataframes created using sqlContext. The only solution I...
There are many ways to do that: * Option 1. Using [selectExpr](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=selectexpr#pyspark.sql.DataFrame.selectExpr). ``` data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)], ["Name", "askdaosdka"]...
How to change dataframe column names in pyspark?
34,077,353
11
2015-12-03T22:21:55Z
36,302,241
7
2016-03-30T07:25:17Z
[ "python", "apache-spark", "pyspark", "pyspark-sql" ]
I come from pandas background and am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful using the simple command: ``` df.columns = new_column_name_list ``` However, the same doesn't work in pyspark dataframes created using sqlContext. The only solution I...
``` df = df.withColumnRenamed("colName", "newColName").withColumnRenamed("colName2", "newColName2") ``` Advantage of using this way: With long list of columns you would like to change only few column names. This can be very convenient in these scenarios. Very useful when joining tables with duplicate column names.
Send email task with correct context
34,079,191
14
2015-12-04T01:03:52Z
34,191,943
9
2015-12-10T00:58:33Z
[ "python", "python-2.7", "flask", "celery", "celery-task" ]
This code is my celery worker script: ``` from app import celery, create_app app = create_app('default') app.app_context().push() ``` When I try to run the worker I will get into this error: ``` File "/home/vagrant/myproject/venv/app/mymail.py", line 29, in send_email_celery msg.html = render_template(template +...
Finally found the reason for the problem after some debugging with this [code](https://github.com/miguelgrinberg/flasky-with-celery). I have an `app_context_processor` that does not return any result. ``` @mod.app_context_processor def last_reputation_changes(): if current_user: #code return dic...
Tensor with unspecified dimension in tensorflow
34,079,787
7
2015-12-04T02:11:33Z
34,082,273
13
2015-12-04T06:33:15Z
[ "python", "tensorflow" ]
I'm playing around with tensorflow and ran into a problem with the following code: ``` def _init_parameters(self, input_data, labels): # the input shape is (batch_size, input_size) input_size = tf.shape(input_data)[1] # labels in one-hot format have shape (batch_size, num_classes) num_classes = tf.sh...
As Ishamael says, all tensors have a static shape, which is known at graph construction time and accessible using [`Tensor.get_shape()`](http://www.tensorflow.org/api_docs/python/framework.html#Tensor.get_shape); and a dynamic shape, which is only known at runtime and is accessible by fetching the value of the tensor, ...
Why does heroku local:run wants to use the global python installation instead of the currently activated virtual env?
34,086,320
16
2015-12-04T10:35:32Z
34,151,405
10
2015-12-08T08:44:35Z
[ "python", "heroku", "virtualenv", "pythonpath" ]
Using Heroku to deploy our Django application, everything seems to work by the spec, except the `heroku local:run` command. We oftentimes need to run commands through Django's manage.py file. Running them on the **remote**, as one-off dynos, works flawlessly. To run them **locally**, we try: ``` heroku local:run pyth...
After contacting Heroku's support, we understood the problem. The support confirmed that `heroku local:run` should as expected use the currently active virtual env. The problem is a local configuration problem, due to our `.bashrc` content: `heroku local:run` sources `.bashrc` (and in our case, this was prepending $P...
Efficiently processing data in text file
34,087,263
5
2015-12-04T11:23:58Z
34,087,401
12
2015-12-04T11:31:03Z
[ "python", "file" ]
Let's assume I have a (text) file with the following structure (name, score): ``` a 0 a 1 b 0 c 0 d 3 b 2 ``` And so on. My aim is to sum the scores for every name and order them from highest score to lowest score. So in this case, I want the following output: ```...
Read all data into a dictionary: ``` from collections import defaultdict from operator import itemgetter scores = defaultdict(int) with open('my_file.txt') as fobj: for line in fobj: name, score = line.split() scores[name] += int(score) ``` and the sorting: ``` for name, score in sorted(scores.i...
Efficiently processing data in text file
34,087,263
5
2015-12-04T11:23:58Z
34,087,437
7
2015-12-04T11:32:52Z
[ "python", "file" ]
Let's assume I have a (text) file with the following structure (name, score): ``` a 0 a 1 b 0 c 0 d 3 b 2 ``` And so on. My aim is to sum the scores for every name and order them from highest score to lowest score. So in this case, I want the following output: ```...
This is a good use case for `collections.Counter`: ``` from collections import Counter scores = Counter() with open('my_file') as f: for line in f: key, score = line.split() scores.update({key: int(score)}) for key, score in scores.most_common(): print(key, score) ```
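The same approach, self-contained with the question's sample data held in memory instead of a file (and using item assignment rather than `update`, which works equally well since a `Counter` defaults missing keys to 0):

```python
from collections import Counter

lines = ["a 0", "a 1", "b 0", "c 0", "d 3", "b 2"]

scores = Counter()
for line in lines:
    name, score = line.split()
    scores[name] += int(score)

# most_common() yields (key, count) pairs sorted by count, descending
for name, score in scores.most_common():
    print(name, score)
# d 3
# b 2
# a 1
# c 0
```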
How do I log multiple very similar events gracefully in python?
34,090,999
5
2015-12-04T14:50:13Z
34,150,824
7
2015-12-08T08:03:43Z
[ "python", "exception", "logging" ]
With Python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of Python's `logging` module or a **custom formatter/filter** for it so collecting logging events of the same kind happens in the bac...
You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's [singletons](http://stackoverflow.com/questions/31875/is-there-a-simple-elegant-way-to-define-singletons-in-python) but I guess you may have an existing code base that uses logging. I'd also suggest...
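If hooking into `logging` is still preferred, one possible design (a hypothetical sketch, not from the answer) is a `logging.Filter` that swallows consecutive duplicates and reports how many were skipped when a different message finally arrives:

```python
import logging

class DuplicateCollapseFilter(logging.Filter):
    """Collapse consecutive duplicate messages into one summary line.

    Hypothetical sketch: a production version would also flush the
    pending count on shutdown or after a timeout, and would likely key
    on more than just record.msg.
    """

    def __init__(self):
        super().__init__()
        self.last_msg = None
        self.skipped = 0

    def filter(self, record):
        current = record.msg
        if current == self.last_msg:
            self.skipped += 1
            return False  # suppress the duplicate entirely
        if self.skipped:
            # Prepend a note about how many repeats were swallowed
            record.msg = "(last message repeated %d more times) %s" % (
                self.skipped, current)
            self.skipped = 0
        self.last_msg = current
        return True

# Attach to a handler so every record passes through the filter
handler = logging.StreamHandler()
handler.addFilter(DuplicateCollapseFilter())
```

Attaching the filter to a handler (rather than a logger) means propagated records from child loggers are also deduplicated.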
How to add header row to a pandas DataFrame
34,091,877
8
2015-12-04T15:35:59Z
34,092,032
10
2015-12-04T15:43:00Z
[ "python", "csv", "pandas", "header" ]
I am reading a csv file into `pandas`. This csv file constists of four columns and some rows, but does not have a header row, which I want to add. I have been trying the following: ``` Cov = pd.read_csv("path/to/file.txt", sep='\t') Frame=pd.DataFrame([Cov], columns = ["Sequence", "Start", "End", "Coverage"]) Frame.to...
You can use `names` directly in the [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) > names : array-like, default None List of column names to use. If file > contains no header row, then you should explicitly pass header=None ``` Cov = pd.read_csv("path/to/file.txt", sep='\t',...
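A self-contained sketch with a hypothetical stand-in for the tab-separated file (the column values here are invented for illustration):

```python
import io
import pandas as pd

# Stand-in for "path/to/file.txt"; a real call would pass the path instead
data = io.StringIO("chr1\t10\t20\t5\nchr1\t20\t30\t7\n")

# header=None tells pandas not to consume the first data row as a header;
# names supplies the header explicitly.
cov = pd.read_csv(data, sep="\t", header=None,
                  names=["Sequence", "Start", "End", "Coverage"])
print(list(cov.columns))  # ['Sequence', 'Start', 'End', 'Coverage']
```

Passing `header=None` alongside `names` is the safe combination for a headerless file: without it, a file whose first line happens to look like data would still parse, but an explicit `header=None` makes the intent unambiguous.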