| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
PEP 0492 - Python 3.5 async keyword | 31,291,129 | 31 | 2015-07-08T11:19:26Z | 31,291,832 | 39 | 2015-07-08T11:52:10Z | [
"python",
"python-3.x",
"asynchronous",
"async-await",
"coroutine"
] | [PEP 0492](https://www.python.org/dev/peps/pep-0492/) adds the `async` keyword to Python 3.5.
How does Python benefit from the use of this operator? The example that is given for a coroutine is
```
async def read_data(db):
data = await db.fetch('SELECT ...')
```
According to the docs this achieves
> suspend[ing... | No, co-routines do not involve any kind of threads. Co-routines allow for *cooperative* multi-tasking in that each co-routine yields control voluntarily. Threads on the other hand switch between units at arbitrary points.
Up to Python 3.4, it was possible to write co-routines using *generators*; by using `yield` or `y... |
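A minimal sketch of cooperative suspension under the PEP 492 syntax (using `asyncio.sleep(0)` as the voluntary yield point; `asyncio.run` assumes Python 3.7+):

```python
import asyncio

async def read_data():
    # `await` marks a voluntary suspension point -- no threads involved,
    # the event loop decides which coroutine runs next
    await asyncio.sleep(0)
    return "data"

result = asyncio.run(read_data())
print(result)  # data
```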
How to load an existing ipython notebook? | 31,292,739 | 10 | 2015-07-08T12:30:15Z | 31,292,812 | 19 | 2015-07-08T12:33:32Z | [
"python",
"ipython"
] | I'm missing something really obvious here but I want to load an existing .ipynb file in my own ipython session. I've tried the following:
```
$ ipython dream.ipynb
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
/h... | You must [start `ipython notebook`](https://ipython.org/ipython-doc/3/notebook/notebook.html#starting-the-notebook-server), otherwise `ipython` tries to execute `dream.ipynb` as though it were a file containing Python code:
```
ipython notebook dream.ipynb
``` |
In python, why is s*3 faster than s+s+s? | 31,295,017 | 3 | 2015-07-08T14:01:07Z | 31,295,106 | 15 | 2015-07-08T14:05:03Z | [
"python",
"string",
"operators"
] | I was going through the google's python intro and came across the statement that `s * 3` is faster than doing `s + s + s` where `s` is of type `string`.
Any reason for that to happen?
I googled and found [which is faster s+='a' or s=s+'a' in python](http://stackoverflow.com/questions/27287428/which-is-faster-s-a-or-s... | Because `s * 3` is one operation, whereas `s + s + s` is two operations; it's really `(s + s) + s`, creating an additional string object that then gets discarded.
You can see the difference by using [`dis`](https://docs.python.org/2/library/dis.html) to look at the bytecode each generates:
`s + s + s`:
```
3 ... |
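Both claims — identical results, different costs — can be checked with `timeit` (a sketch; absolute timings vary by machine):

```python
import timeit

s = "abc"
# same result either way
assert s * 3 == s + s + s

setup = "s = 'abc' * 100"
t_mul = timeit.timeit("s * 3", setup=setup, number=100_000)
t_add = timeit.timeit("s + s + s", setup=setup, number=100_000)
# t_mul is typically smaller: one allocation instead of two
print(t_mul, t_add)
```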
Python variables lose scope inside generator? | 31,298,428 | 10 | 2015-07-08T16:27:03Z | 31,298,828 | 7 | 2015-07-08T16:48:26Z | [
"python",
"scope",
"generator"
] | The code below returns `NameError: global name 'self' is not defined`. Why?
```
lengths = [3, 10]
self.fooDict = getOrderedDict(stuff)
if not all(0 < l < len(self.fooDict) for l in lengths):
raise ValueError("Bad lengths!")
```
Note that `self.fooDict` is an OrderedDict (imported from the collections library) th... | ## Short answer and workaround
You've run into a limitation of the debugger. Expressions entered into the debugger cannot use *non-locally scoped values* because the debugger cannot create the required closures.
You could instead create a *function* to run your generator, thus creating a new scope at the same time:
... |
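A sketch of the suggested workaround, with a throwaway dict standing in for `self.fooDict` (the dict contents are hypothetical):

```python
lengths = [3, 10]
foo_dict = {i: i for i in range(12)}  # hypothetical stand-in for self.fooDict

def lengths_ok(d, lengths):
    # inside a function the generator gets a proper enclosing scope,
    # so `d` is visible to it even from a debugger prompt
    return all(0 < l < len(d) for l in lengths)

if not lengths_ok(foo_dict, lengths):
    raise ValueError("Bad lengths!")
```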
Difference between [y for y in x.split('_')] and x.split('_') | 31,303,026 | 2 | 2015-07-08T20:26:59Z | 31,303,073 | 8 | 2015-07-08T20:29:44Z | [
"python",
"string",
"list",
"split"
] | I've found [this question](http://stackoverflow.com/q/3668964/1937994) and one thing in the original code bugs me:
```
>>> x="Alpha_beta_Gamma"
>>> words = [y for y in x.split('_')]
```
What's the point of doing this: `[y for y in x.split('_')]`? `split` already returns a list and items aren't manipulated in this lis... | You're correct; there's no point in doing that. However, it's often seen in combination with some kind of filter or other structure, such as `[y for y in x.split('_') if y.isalpha()]`. |
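To make the point concrete (a small sketch):

```python
x = "Alpha_beta_Gamma"

# identical to x.split('_'): the bare comprehension adds nothing
assert [y for y in x.split('_')] == x.split('_')

# it earns its keep once a filter or a transform is attached
capitalized = [y for y in x.split('_') if y[0].isupper()]
print(capitalized)  # ['Alpha', 'Gamma']
```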
astropy.io fits efficient element access of a large table | 31,315,325 | 2 | 2015-07-09T10:54:41Z | 31,319,385 | 7 | 2015-07-09T13:44:39Z | [
"python",
"arrays",
"fits",
"astropy"
] | I am trying to extract data from a binary table in a FITS file using Python and astropy.io. The table contains an events array with over 2 million events. What I want to do is store the TIME values of certain events in an array, so I can then do analysis on that array. The problem I have is that, whereas in fortran (us... | You need to do this using numpy vector operations. Without special tools like numba, doing large loops like you've done will always be slow in Python because it is an interpreted language. Your program should look more like:
```
energy = events['PI'] / 1000.
e_ok = (energy > min_energy) & (energy < max_energy)
rad2 = ... |
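A self-contained sketch of the vectorised selection, with tiny made-up arrays standing in for the FITS event columns:

```python
import numpy as np

# hypothetical stand-ins for the event-table columns
pi = np.array([500, 1500, 2500, 3500])
time = np.array([0.1, 0.2, 0.3, 0.4])

energy = pi / 1000.0
e_ok = (energy > 1.0) & (energy < 3.0)  # elementwise boolean mask, no Python loop
selected_times = time[e_ok]
print(selected_times)  # [0.2 0.3]
```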
Why doesn't the value in for loop change? | 31,315,514 | 3 | 2015-07-09T11:02:16Z | 31,315,569 | 8 | 2015-07-09T11:04:32Z | [
"python"
] | Why does the value of `range(len(whole)/2)` not change after `whole` is modified? And what do you call `range(len...)` value in for-loop?
```
whole = 'selenium'
for i in range(len(whole)/2):
print whole
whole = whole[1:-1]
```
output:
```
selenium
eleniu
leni
en
``` | The `range()` produces a list of integers *once*. That list is then iterated over by the `for` loop. It is not re-created each iteration; that'd be very inefficient.
You could use a `while` loop instead:
```
i = 0
while i < (len(whole) / 2):
print whole
whole = whole[1:-1]
i += 1
```
the `while` conditio... |
Setting DataFrame values with enlargement | 31,319,888 | 7 | 2015-07-09T14:05:01Z | 36,555,489 | 12 | 2016-04-11T17:37:05Z | [
"python",
"pandas"
] | I have two `DataFrames` (with `DatetimeIndex`) and want to update the first frame (the older one) with data from the second frame (the newer one).
The new frame may contain more recent data for rows already contained in the old frame. In this case, data in the old frame should be overwritten with data from the new... | `df2.combine_first(df1)` ([documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html))
seems to serve your requirement; see the code snippet and output below:
```
import pandas as pd
print 'pandas-version: ', pd.__version__
df1 = pd.DataFrame.from_records([('2015-07-09 12:00:00',1,1... |
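A minimal runnable sketch of the `combine_first` behaviour, with toy timestamps standing in for the real frames:

```python
import pandas as pd

old = pd.DataFrame({'v': [1.0, 2.0]},
                   index=pd.to_datetime(['2015-07-09 12:00', '2015-07-09 13:00']))
new = pd.DataFrame({'v': [20.0, 30.0]},
                   index=pd.to_datetime(['2015-07-09 13:00', '2015-07-09 14:00']))

# values from `new` win wherever both frames have a row;
# rows unique to either frame survive
merged = new.combine_first(old)
print(merged)
```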
Doc2vec : How to get document vectors | 31,321,209 | 15 | 2015-07-09T14:57:45Z | 31,325,297 | 17 | 2015-07-09T18:19:46Z | [
"python",
"gensim",
"word2vec"
] | How to get document vectors of two text documents using Doc2vec?
I am new to this, so it would be helpful if someone could point me in right direction/help me with some tutorial
I am using gensim python library.
```
doc1=["This is a sentence","This is another sentence"]
documents1=[doc.strip().split(" ") for doc in d... | ```
doc=["This is a sentence","This is another sentence"]
documents=[doc.strip().split(" ") for doc in doc1 ]
model = doc2vec.Doc2Vec(documents, size = 100, window = 300, min_count = 10, workers=4)
```
I got AttributeError: 'list' object has no attribute 'words' because the input documents to the Doc2vec() was not in ... |
Doc2vec : How to get document vectors | 31,321,209 | 15 | 2015-07-09T14:57:45Z | 33,403,307 | 17 | 2015-10-28T23:21:45Z | [
"python",
"gensim",
"word2vec"
] | How to get document vectors of two text documents using Doc2vec?
I am new to this, so it would be helpful if someone could point me in right direction/help me with some tutorial
I am using gensim python library.
```
doc1=["This is a sentence","This is another sentence"]
documents1=[doc.strip().split(" ") for doc in d... | **Gensim was updated**. The syntax of LabeledSentence does not contain **labels**. There are now **tags** - see documentation for LabeledSentence <https://radimrehurek.com/gensim/models/doc2vec.html>
However, @bee2502 was right with
```
docvec = model.docvecs[99]
```
It should show the 100th vector's value for train... |
get_dummies python memory error | 31,321,892 | 3 | 2015-07-09T15:27:48Z | 31,324,037 | 12 | 2015-07-09T17:08:58Z | [
"python",
"pandas"
] | I'm relatively new to Python and I have a little problem with a data set. The data set has 400,000 rows and 300 variables. I have to get dummy variables for a categorical variable with 3000+ different items. At the end I want to end up with a data set with 3300 variables or features so that I can train a RandomForest model.... | **update:** looks like get\_dummies is going to be returning integers by default, starting with version 0.19.0 (<https://github.com/pydata/pandas/issues/8725>)
Here are a couple of possibilities to try. Both will reduce the memory footprint of the dataframe substantially but you could still run into memory issues late... |
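One of the usual possibilities for a high-cardinality column is sparse output; a toy sketch (the real frame would carry the 3000+ categories):

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a', 'c'])

dense = pd.get_dummies(s)
sparse = pd.get_dummies(s, sparse=True)  # sparse columns: far smaller footprint
                                         # when most entries are zero
print(dense.shape)  # (4, 3)
```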
Scikit-learn: How to obtain True Positive, True Negative, False Positive and False Negative | 31,324,218 | 3 | 2015-07-09T17:19:02Z | 31,324,768 | 10 | 2015-07-09T17:50:02Z | [
"python",
"machine-learning",
"scikit-learn",
"classification",
"supervised-learning"
] | I am new in machine learning and in scikit-learn.
**My problem:**
(Please correct any misconceptions)
I have a dataset which is a BIG JSON, I retrieve it and store it in a `trainList` variable.
I pre-process it in order to be able to work with it.
Once I have done that, I start the classification:
1. I u... | You can obtain all of the parameters from the confusion matrix.
The structure of the confusion matrix (a 2x2 matrix; in scikit-learn's convention, rows are true labels and columns are predicted labels) is as follows
```
TN|FP
FN|TP
```
So
```
TN = cm[0][0]
FP = cm[0][1]
FN = cm[1][0]
TP = cm[1][1]
```
More details at <https://en.wikipedia.org/wiki/Confusion_matrix> |
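Note that scikit-learn orders `confusion_matrix` output with true labels on rows and sorted label order on both axes, so for binary {0, 1} the layout is `[[TN, FP], [FN, TP]]`; a compact way to unpack it is `ravel()` (a sketch with toy labels):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 0, 1]
y_pred = [0, 1, 1, 1]

# row-major flattening of [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 1 1 0 2
```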
Finding gradient of a Caffe conv-filter with regards to input | 31,324,739 | 25 | 2015-07-09T17:48:20Z | 31,349,941 | 7 | 2015-07-10T20:42:29Z | [
"python",
"c++",
"neural-network",
"deep-learning",
"caffe"
] | I need to find the gradient with regards to the input layer for a single convolutional filter in a convolutional neural network (CNN) as a way to [visualize the filters](http://research.google.com/pubs/pub38115.html).
Given a trained network in the Python interface of [Caffe](http://caffe.berkeleyvision.org/) such as... | You can get the gradients in terms of any layer when you run the `backward()` pass. Just specify the list of layers when calling the function. To show the gradients in terms of the data layer:
```
net.forward()
diffs = net.backward(diffs=['data', 'conv1'])
data_point = 16
plt.imshow(diffs['data'][data_point].squeeze(... |
Finding gradient of a Caffe conv-filter with regards to input | 31,324,739 | 25 | 2015-07-09T17:48:20Z | 31,847,179 | 15 | 2015-08-06T05:02:05Z | [
"python",
"c++",
"neural-network",
"deep-learning",
"caffe"
] | I need to find the gradient with regards to the input layer for a single convolutional filter in a convolutional neural network (CNN) as a way to [visualize the filters](http://research.google.com/pubs/pub38115.html).
Given a trained network in the Python interface of [Caffe](http://caffe.berkeleyvision.org/) such as... | Caffe net juggles two "streams" of numbers.
The first is the data "stream": images and labels pushed through the net. As these inputs progress through the net they are converted into high-level representations and eventually into class-probability vectors (in classification tasks).
The second "stream" holds the pa... |
TypeError constructor returned NULL while importing pyplot in ssh | 31,328,436 | 6 | 2015-07-09T21:20:44Z | 31,328,665 | 7 | 2015-07-09T21:36:15Z | [
"python",
"windows",
"matplotlib",
"ssh"
] | I am having difficulties importing `matplotlib.pyplot` when I am using ssh to access my local lab cluster.
Indeed, trying to `import matplotlib.pyplot as plt` is giving me an error. I have tried to just `import matplotlib as mpl` and this is fine, so something specifically about importing `pyplot` is wrong here. The l... | You are failing to load GTK. Most likely, it is because you do not have access to an X11 server and can't draw windows (which is what GTK does).
Try `matplotlib.use("Pdf")` before importing `pyplot`. Then you won't need to load GTK and it won't fail. You can still make plots and save them to pdf but you can't draw the... |
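A sketch of selecting a non-interactive backend before `pyplot` is imported (the answer uses `"Pdf"`; `"Agg"` below is another non-interactive backend and is an assumption of this sketch):

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")        # must happen before the pyplot import
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
out = os.path.join(tempfile.mkdtemp(), "plot.png")
plt.savefig(out)             # no display needed; the figure goes straight to disk
```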
Unicode Encode Error when writing pandas df to csv | 31,331,358 | 3 | 2015-07-10T02:09:04Z | 31,331,449 | 11 | 2015-07-10T02:23:38Z | [
"python",
"pandas",
"export-to-csv",
"python-unicode"
] | I cleaned 400 excel files and read them into python using pandas and appended all the raw data into one big df.
Then when I try to export it to a csv:
```
df.to_csv("path",header=True,index=False)
```
I get this error:
```
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc7' in position 20: ordinal not ... | You have `unicode` values in your DataFrame. Files store bytes, which means all `unicode` have to be encoded into bytes before they can be stored in a file. You have to specify an encoding, such as `utf-8`. For example,
```
df.to_csv('path', header=True, index=False, encoding='utf-8')
```
If you don't specify an enco... |
Adding a new column in Data Frame derived from other columns (Spark) | 31,333,437 | 3 | 2015-07-10T05:55:35Z | 31,338,072 | 7 | 2015-07-10T10:05:11Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I'm using Spark 1.3.0 and Python. I have a dataframe and I wish to add an additional column which is derived from other columns. Like this,
```
>>old_df.columns
[col_1, col_2, ..., col_m]
>>new_df.columns
[col_1, col_2, ..., col_m, col_n]
```
where
```
col_n = col_3 - col_4
```
How do I do this in PySpark? | One way to achieve that is to use `withColumn` method:
```
old_df = sqlContext.createDataFrame(sc.parallelize(
[(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))
new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2)
```
Alternatively you can use SQL on a registered table:
```
old_df.registerTempTable('ol... |
Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method | 31,335,736 | 11 | 2015-07-10T08:16:04Z | 31,337,178 | 20 | 2015-07-10T09:25:38Z | [
"python",
"django",
"django-rest-framework"
] | I am getting the error ".accepted\_renderer not set on Response resp api django".
I am following the django rest-api tutorial.
The Django version I am using is 1.8.3.
I followed the tutorial through the first part and it worked properly. But when I continued to the second part, sending a response, I got an error:
```
Cannot apply DjangoModel... | You probably have set `DjangoModelPermissions` as a default permission class in your settings. Something like:
```
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.DjangoModelPermissions',
)
}
```
`DjangoModelPermissions` can only be applied to views that have a `.queryse... |
Cannot apply DjangoModelPermissions on a view that does not have `.queryset` property or overrides the `.get_queryset()` method | 31,335,736 | 11 | 2015-07-10T08:16:04Z | 31,338,276 | 8 | 2015-07-10T10:16:27Z | [
"python",
"django",
"django-rest-framework"
] | I am getting the error ".accepted\_renderer not set on Response resp api django".
I am following the django rest-api tutorial.
The Django version I am using is 1.8.3.
I followed the tutorial through the first part and it worked properly. But when I continued to the second part, sending a response, I got an error:
```
Cannot apply DjangoModel... | I got it working in another way.
My logged-in user was the superuser I had created.
So I created another user from the admin, made him a staff user and granted all the permissions, then logged in to the admin as that user.
In the settings.py file I changed the code.
```
REST_FRAMEWORK = {
# Use Django's standard `d... |
Error handling methodology | 31,340,239 | 5 | 2015-07-10T12:00:39Z | 31,340,289 | 13 | 2015-07-10T12:03:02Z | [
"python"
] | In C, if I'm not wrong, the `main` function returns 0 if no errors occurred, and something different from 0 if an error occurs.
Is is appropriate to do the same in Python (as long as a function does not have to return any specific value but one to indicate the success/failure); or instead just handle exceptions? | In Python you shouldn't use the return value to indicate an error. You should use Exceptions.
So either let the exception that fired bubble up, or raise a new one.
```
def check_foo(foo):
if foo == bar:
do_something(args)
try:
check_foo(...)
except SomeError:
# Oops! Failure!
something_went_... |
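If the process's exit status matters to the shell, the usual pattern keeps exceptions for in-process flow and converts them to a status code only at the very top level (a sketch; `risky` is a made-up workload):

```python
import sys

def risky(fail):
    if fail:                      # hypothetical workload
        raise ValueError("boom")

def main(argv):
    try:
        risky(fail="--fail" in argv)
    except ValueError:
        return 1                  # non-zero signals failure to the shell
    return 0

# At script top level you would hand the status to the OS:
#   sys.exit(main(sys.argv[1:]))
print(main([]))          # 0
print(main(["--fail"]))  # 1
```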
python error when initializing a class derived from an abstract one | 31,340,339 | 3 | 2015-07-10T12:05:11Z | 31,340,505 | 7 | 2015-07-10T12:13:19Z | [
"python",
"inheritance",
"abstract-base-class"
] | I have this simple code and I get a strange error:
```
from abc import ABCMeta, abstractmethod
class CVIterator(ABCMeta):
def __init__(self):
self.n = None # the value of n is obtained in the fit method
return
class KFold_new_version(CVIterator): # new version of KFold
def __init__(self, ... | You used the `ABCMeta` meta class incorrectly. It is a *meta* class, not a base class. Use it as such.
For Python 2, that means assigning it to the `__metaclass__` attribute on the class:
```
class CVIterator(object):
__metaclass__ = ABCMeta
def __init__(self):
self.n = None # the value of n is obtai... |
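In Python 3 the same intent is spelled with the `metaclass` keyword argument (or the `abc.ABC` convenience base); a sketch:

```python
from abc import ABCMeta, abstractmethod

class CVIterator(metaclass=ABCMeta):   # Python 3 spelling
    def __init__(self):
        self.n = None

    @abstractmethod
    def split(self):
        ...

class KFoldNewVersion(CVIterator):
    def split(self):
        return []

k = KFoldNewVersion()   # fine: all abstract methods are implemented
```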
Python: why can I put mutable object in a dict or set? | 31,340,756 | 2 | 2015-07-10T12:26:42Z | 31,340,810 | 7 | 2015-07-10T12:29:31Z | [
"python",
"hash",
"immutability"
] | Given the following example,
```
class A(object):
pass
a = A()
a.x = 1
```
Obviously a is mutable, and then I put a in a set,
```
set([a])
```
It succeeded. Why I can put mutable object like "a" into a set/dict? Shouldn't set/dict only allow immutable objects so they can identify the object and avoid duplicatio... | Python doesn't test for *mutable* objects, it tests for *hashable* objects.
Custom class instances are by default hashable. That's fine because the default `__eq__` implementation for such classes only tests for instance *identity* and the hash is based on the same information.
In other words, it doesn't matter that ... |
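A short demonstration of the distinction (hashable versus mutable):

```python
class A(object):
    pass

a = A()
a.x = 1

s = {a}        # fine: instances are hashable by default (identity-based hash)
assert a in s

a.x = 2        # mutating attributes doesn't change the identity hash
assert a in s

# lists, by contrast, set __hash__ to None, so sets reject them
try:
    {[]}
except TypeError:
    print("lists are unhashable")
```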
MySQL Improperly Configured Reason: unsafe use of relative path | 31,343,299 | 21 | 2015-07-10T14:26:27Z | 31,821,332 | 58 | 2015-08-05T00:01:43Z | [
"python",
"mysql",
"django",
"dynamic-linking",
"osx-elcapitan"
] | I'm using Django, and when I run `python manage.py runserver` I receive the following error:
```
ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Library/Python/2.7/site-packages/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib
Referenced from: /Library/Python/2.7/site-packages/_mysql.so
Reaso... | In OS X El Capitan (10.11), Apple added [System Integrity Protection](https://support.apple.com/en-us/HT204899). This prevents programs in protected locations like /usr from calling a shared library that uses a relative reference to another shared library. In the case of \_mysql.so, it contains a relative reference to ... |
sql.h not found when installing PyODBC on Heroku | 31,353,137 | 4 | 2015-07-11T03:31:07Z | 31,360,218 | 15 | 2015-07-11T18:05:05Z | [
"python",
"heroku",
"pyodbc"
] | I'm trying to install PyODBC on Heroku, but I get `fatal error: sql.h: No such file or directory` in the logs when pip runs. How do I fix this error? | To follow up on the answer below...
Example for Ubuntu:
```
sudo apt-get install unixodbc unixodbc-dev
```
Example for CentOS:
```
sudo yum install unixODBC-devel
```
On Windows:
```
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=yourserver.yourcompany.com;DATABASE=yourdb;UID=user;PWD=password')
```
On Linux:... |
Django Registration Redux: how to change the unique identifier from username to email and use email as login | 31,356,535 | 3 | 2015-07-11T11:08:05Z | 31,358,213 | 7 | 2015-07-11T14:31:31Z | [
"python",
"django",
"django-registration"
] | I'm using Django-registration-redux in my project for user registration. It uses the default User model, which uses username as the unique identifier.
Now we want to discard username and use email as the unique identifier.
And also we want to use email instead of username to login.
How to achieve this?
And is it possib... | You can override registration form like this
```
from registration.forms import RegistrationForm
class MyRegForm(RegistrationForm):
username = forms.CharField(max_length=254, required=False, widget=forms.HiddenInput())
def clean_email(self):
email = self.cleaned_data['email']
self.cleaned_data... |
Why does 'the' survive after .remove? | 31,356,546 | 8 | 2015-07-11T11:09:27Z | 31,356,575 | 14 | 2015-07-11T11:13:17Z | [
"python",
"string",
"python-2.7",
"python-3.x"
] | Something weird happens in this code:
```
fh = open('romeo.txt', 'r')
lst = list()
for line in fh:
line = line.split()
for word in line:
lst.append(word)
for word in lst:
numberofwords = lst.count(word)
if numberofwords > 1:
lst.remove(word)
lst.sort()
print len(lst)
print lst
```
... | In this loop:
```
for word in lst:
numberofwords = lst.count(word)
if numberofwords > 1:
lst.remove(word)
```
`lst` is modified while iterating over it. Don't do that. A simple fix is to iterate over a copy of it:
```
for word in lst[:]:
``` |
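An alternative that avoids mutating during iteration entirely is to count first and rebuild (a sketch with `collections.Counter`):

```python
from collections import Counter

words = "the quick the lazy the dog".split()
counts = Counter(words)

# words that occur exactly once, sorted -- no in-place removal needed
singles = sorted(w for w, c in counts.items() if c == 1)
print(singles)   # ['dog', 'lazy', 'quick']
```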
Error : "You are trying to add a non-nullable field" | 31,357,346 | 7 | 2015-07-11T12:48:43Z | 31,357,397 | 8 | 2015-07-11T12:54:29Z | [
"python",
"django"
] | I defined below model and getting
error : `You are trying to add a non-nullable field 'user' to videodata without a default; we can't do that`
models.py
```
class User(Model):
userID = models.IntegerField()
userName = models.CharField(max_length=40)
email = models.EmailField()
class Meta:
ord... | As the error says, your user field on VideoData is not allowing nulls, so you either need to give it a default user or allow nulls. Easiest way is to allow nulls.
```
user = models.ForeignKey(User, related_name='User', null=True)
```
or have a default user
```
user = models.ForeignKey(User, related_name='User', defa... |
Using Cloudfront with Django S3Boto | 31,357,353 | 14 | 2015-07-11T12:49:45Z | 31,440,339 | 23 | 2015-07-15T20:24:53Z | [
"python",
"django"
] | I have successfully set up my app to use S3 for storing all static and media files. However, I would like to upload to S3 (current operation), but serve from a cloudfront instance I have set up. I have tried adjusting settings to the cloudfront url but it does not work. How can I upload to S3 and serve from Cloudfront ... | Your code is almost complete except you are not adding your cloudfront domain to STATIC\_URL/MEDIA\_URL and your custom storages.
In detail, you must first install the dependencies
```
pip install django-storages-redux boto
```
Add the required settings to your django settings file
```
INSTALLED_APPS = (
...
... |
Format y axis as percent | 31,357,611 | 9 | 2015-07-11T13:21:01Z | 31,357,733 | 16 | 2015-07-11T13:36:31Z | [
"python",
"pandas",
"matplotlib",
"plot"
] | I have an existing plot that was created with pandas like this:
```
df['myvar'].plot(kind='bar')
```
The y axis is formatted as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and **I can only place code below the line above that creates the plot** (I cannot add ax=ax... | A pandas DataFrame plot will return the `ax` for you, and then you can manipulate the axes however you want.
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100,5))
# you get ax from here
ax = df.plot()
type(ax) # matplotlib.axes._subplots.AxesSubplot
# manipulate
vals = ax.get... |
Format y axis as percent | 31,357,611 | 9 | 2015-07-11T13:21:01Z | 35,446,404 | 15 | 2016-02-17T01:39:40Z | [
"python",
"pandas",
"matplotlib",
"plot"
] | I have an existing plot that was created with pandas like this:
```
df['myvar'].plot(kind='bar')
```
The y axis is formatted as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and **I can only place code below the line above that creates the plot** (I cannot add ax=ax... | [Jianxun](http://stackoverflow.com/users/5014134/jianxun-li)'s solution did the job for me but broke the y value indicator at the bottom left of the window.
I ended up using `FuncFormatter` instead (and also stripped the unnecessary trailing zeroes as suggested [here](http://stackoverflow.com/questions/14997799/most-py...
Format y axis as percent | 31,357,611 | 9 | 2015-07-11T13:21:01Z | 36,319,915 | 7 | 2016-03-30T21:16:37Z | [
"python",
"pandas",
"matplotlib",
"plot"
] | I have an existing plot that was created with pandas like this:
```
df['myvar'].plot(kind='bar')
```
The y axis is formatted as float and I want to change the y axis to percentages. All of the solutions I found use ax.xyz syntax and **I can only place code below the line above that creates the plot** (I cannot add ax=ax... | This is a few months late, but I have created [PR#6251](https://github.com/matplotlib/matplotlib/pull/6251) with matplotlib to add a new `PercentFormatter` class. With this class you just need one line to reformat your axis (two if you count the import of `matplotlib.ticker`):
```
import ...
import matplotlib.ticker a... |
How to mutate a list with a function in python? | 31,359,652 | 7 | 2015-07-11T17:06:02Z | 31,359,679 | 7 | 2015-07-11T17:09:02Z | [
"python",
"function",
"mutable"
] | Here's a pseudocode I've written describing my problem:-
```
func(s):
#returns a value of s
x = a list of strings
print func(x)
print x #these two should give the SAME output
```
When I print the value of x in the end, I want it to be the one returned by func(x). Can I do something like this only by editing the f... | That's already how it behaves, the function *can* mutate the list
```
>>> l = ['a', 'b', 'c'] # your list of strings
>>> def add_something(x): x.append('d')
...
>>> add_something(l)
>>> l
['a', 'b', 'c', 'd']
```
Note however that you cannot mutate the original list in this manner
```
def modify(x):
x = ['someth... |
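The rebinding-versus-mutation distinction, and the slice-assignment idiom that does mutate in place, in a short sketch:

```python
def modify_rebind(x):
    x = ['something else']      # rebinds the local name only; caller unaffected

def modify_inplace(x):
    x[:] = ['something else']   # slice assignment mutates the caller's list

l = ['a', 'b']
modify_rebind(l)
print(l)                        # ['a', 'b']

modify_inplace(l)
print(l)                        # ['something else']
```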
Memory efficient sort of massive numpy array in Python | 31,359,980 | 12 | 2015-07-11T17:40:22Z | 31,362,871 | 10 | 2015-07-11T23:30:10Z | [
"python",
"performance",
"sorting",
"numpy",
"memory"
] | I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = `(868940742, 3)` which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13' MacBook Pro with 16GB of RAM, 500GB solid state HD and an 3.1 GHz intel i7 processo... | At the moment each call to `np.argsort` is generating a `(868940742, 1)` array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to sort the columns of `full_arr` you are generating another `(868940742, 1)` array of floats, since [fancy indexing always returns a copy ra... |
How to get value from a theano tensor variable backed by a shared variable? | 31,361,377 | 10 | 2015-07-11T20:13:34Z | 31,362,146 | 10 | 2015-07-11T21:43:50Z | [
"python",
"numpy",
"scipy",
"theano"
] | I have a theano tensor variable created from casting a shared variable. How can I extract the original or casted values? (I need that so I don't have to carry the original shared/numpy values around.)
```
>>> x = theano.shared(numpy.asarray([1, 2, 3], dtype='float'))
>>> y = theano.tensor.cast(x, 'int32')
>>> y.get_va... | `get_value` only works for shared variables. `TensorVariables` are general expressions and thus potentially need extra input in order to be able to determine their value (Imagine you set `y = x + z`, where `z` is another tensor variable. You would need to specify `z` before being able to calculate `y`). You can either ... |
python dask DataFrame, support for (trivially parallelizable) row apply? | 31,361,721 | 17 | 2015-07-11T20:52:46Z | 31,364,127 | 18 | 2015-07-12T03:35:33Z | [
"python",
"pandas",
"parallel-processing",
"dask"
] | I recently found [dask](http://dask.pydata.org/en/latest/index.html) module that aims to be an easy-to-use python parallel processing module. Big selling point for me is that it works with pandas.
After reading a bit on its manual page, I can't find a way to do this trivially parallelizable task:
```
ts.apply(func) #... | ### `map_partitions`
You can apply your function to all of the partitions of your dataframe with the `map_partitions` function.
```
df.map_partitions(func, columns=...)
```
Note that func will be given only part of the dataset at a time, not the entire dataset like with `pandas apply` (which presumably you wouldn't ... |
Performance difference in pandas read_table vs. read_csv vs. from_csv vs. read_excel? | 31,362,573 | 3 | 2015-07-11T22:43:16Z | 31,362,987 | 12 | 2015-07-11T23:49:58Z | [
"python",
"performance",
"csv",
"pandas",
"dataframe"
] | I tend to import .csv files into pandas, but sometimes I may get data in other formats to make `DataFrame` objects.
Today, I just found out about `read_table` as a "generic" importer for other formats, and wondered if there were significant performance differences between the various methods in pandas for reading .csv... | 1. `read_table` is `read_csv` with `sep=','` replaced by `sep='\t'`, they are two thin wrappers around the same function so the performance will be identical. `read_excel` uses the `xlrd` package to read xls and xlsx files into a DataFrame, it doesn't handle csv files.
2. `from_csv` calls `read_table`, so no. |
graphite/carbon ImportError: No module named fields | 31,363,276 | 3 | 2015-07-12T00:44:55Z | 32,557,105 | 11 | 2015-09-14T03:58:37Z | [
"python",
"carbon",
"graphite",
"centos7"
] | I am able to follow almost all the instructions [here](http://www.unixmen.com/install-graphite-centos-7/)
but when I get to
```
[idf@node1 graphite]$ cd /opt/graphite/webapp/graphite/
[idf@node1 graphite]$ sudo python manage.py syncdb
Could not import graphite.local_settings, using defaults!
/opt/graphite/webapp/grap... | The issue was solved when the package `django-tagging` (0.3.6) was downgraded using the following commands:
```
pip uninstall django-tagging
pip install 'django-tagging<0.4'
``` |
How to sum values in an iterator in a PySpark groupByKey() | 31,366,307 | 2 | 2015-07-12T09:29:41Z | 31,366,342 | 8 | 2015-07-12T09:35:41Z | [
"python",
"apache-spark",
"pyspark"
] | I'm taking my first steps with Spark (Python) and I'm struggling with an iterator inside a groupByKey(); I'm not able to sum the values. Something like this:
```
example = sc.parallelize([('x',1), ('x',1), ('y', 1), ('z', 1)])
example.groupByKey()
x [1,1]
y [1]
z [1]
```
How to have the sum on iterator? I tried someth... | You can simply `mapValues` with `sum`:
```
example.groupByKey().mapValues(sum)
```
although in this particular case `reduceByKey` is much more efficient:
```
example.reduceByKey(lambda x, y: x + y)
```
or
```
from operator import add
example.reduceByKey(add)
``` |
Alternative to for loops | How to check if word contains part of a different word | 31,368,683 | 2 | 2015-07-12T14:11:23Z | 31,368,885 | 8 | 2015-07-12T14:33:40Z | [
"python",
"for-loop",
"set"
] | In the code below I used for loops to check whether, in a set of words, one word is the suffix of another.
My question is: how can I replace the double for loop? The guy who wrote the task mentioned that there is a solution using algorithms (not sure what that is :/ )
```
def checkio(words):
if len(words) == 1... | Let Python generate all combinations to be checked:
```
import itertools
def checkio(data):
return any((x.endswith(y) or y.endswith(x)) for x, y in itertools.combinations(data, 2))
```
And let Python test it:
```
assert checkio({"abc","cba","ba","a","c"}) == True
assert checkio({"walk", "duckwalk"}) == True
ass... |
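A self-contained version of that approach, with the truncated test cases rounded out by toy examples (a sketch):

```python
import itertools

def checkio(words):
    # every unordered pair exactly once; True if either word ends with the other
    return any(x.endswith(y) or y.endswith(x)
               for x, y in itertools.combinations(words, 2))

assert checkio({"abc", "cba", "ba", "a", "c"}) is True   # "cba" ends with "ba"
assert checkio({"walk", "duckwalk"}) is True
assert checkio({"one", "two", "three"}) is False
```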
How can I skip a migration with Django migrate command? | 31,369,466 | 3 | 2015-07-12T15:31:31Z | 31,369,615 | 8 | 2015-07-12T15:47:36Z | [
"python",
"django",
"django-models",
"django-migrations"
] | First, I am asking about Django migration introduced in 1.7, not `south`.
Suppose I have migrations `001_add_field_x`, `002_add_field_y`, and both of them are applied to database. Now I change my mind and decide to revert the second migration and replace it with another migration `003_add_field_z`.
In other words, I ... | You can use the `--fake` option.
Once you revert to `0001` you can run
```
python manage.py migrate <app> 0002 --fake
```
and then run
```
python manage.py migrate <app> #Optionally specify 0003 explicitly
```
which would apply only `0003` in this case.
If you do not want to follow this process for all the enviro... |
Calculate mean and median efficiently | 31,370,214 | 5 | 2015-07-12T16:50:25Z | 31,370,968 | 8 | 2015-07-12T18:10:52Z | [
"python",
"performance",
"numpy",
"mean",
"median"
] | What is the most efficient way to sequentially find the mean and median of rows in a Python list?
For example, my list:
```
input_list = [1,2,4,6,7,8]
```
I want to produce an output list that contains:
```
output_list_mean = [1,1.5,2.3,3.25,4,4.7]
output_list_median = [1,1.5,2.0,3.0,4.0,5.0]
```
Where the mean is... | Anything you do yourself, especially with the median, is either going to require a lot of work, or be very inefficient, but Pandas comes with built-in efficient implementations of the functions you are after, the expanding mean is O(n), the expanding median is O(n\*log(n)) using a skip list:
```
import pandas as pd
im... |
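For small inputs you can reproduce those running statistics with the stdlib alone; this naive O(n² log n) version is shown only to make the expected output concrete (use the pandas expanding functions for real workloads):

```python
from statistics import mean, median

input_list = [1, 2, 4, 6, 7, 8]
prefixes = [input_list[:i + 1] for i in range(len(input_list))]
running_mean = [mean(p) for p in prefixes]
running_median = [median(p) for p in prefixes]

print([round(m, 2) for m in running_mean])  # [1, 1.5, 2.33, 3.25, 4, 4.67]
print(running_median)                       # [1, 1.5, 2, 3.0, 4, 5.0]
```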
Reading JSON from SimpleHTTPServer Post data | 31,371,166 | 12 | 2015-07-12T18:31:50Z | 31,393,963 | 10 | 2015-07-13T21:38:00Z | [
"python",
"ajax",
"rest",
"simplejson",
"simplehttpserver"
] | I am trying to build a simple REST server with Python's SimpleHTTPServer. I am having a problem reading data from the POST message. Please let me know if I am doing it right.
```
from SimpleHTTPServer import SimpleHTTPRequestHandler
import SocketServer
import simplejson
class S(SimpleHTTPRequestHandler):
def _set_hea... | Thanks matthewatabet for the klein idea. I figured a way to implement it using BaseHTTPHandler. The code below.
```
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import SocketServer
import simplejson
import random
class S(BaseHTTPRequestHandler):
def _set_headers(self):
self.send_response(... |
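For reference, here is the same pattern ported to Python 3, with the module renames as assumptions relative to the Python 2 code above (`BaseHTTPServer` became `http.server`, and the stdlib `json` stands in for `simplejson`). The demo binds an ephemeral port, POSTs a JSON document, and checks the echo:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly Content-Length bytes, then decode the JSON body.
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({'received': payload}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Demo: bind an ephemeral port, POST a JSON document, echo it back.
server = HTTPServer(('localhost', 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
request = urllib.request.Request(
    'http://localhost:%d' % server.server_address[1],
    data=json.dumps({'hello': 'world'}).encode(),
    headers={'Content-Type': 'application/json'})
response = json.loads(urllib.request.urlopen(request).read())
print(response)  # {'received': {'hello': 'world'}}
server.shutdown()
```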
reading an ascii file with headers given in the first rows into a pandas dataframe | 31,373,831 | 2 | 2015-07-13T00:13:16Z | 31,374,790 | 7 | 2015-07-13T02:53:30Z | [
"python",
"numpy",
"pandas",
"astropy"
] | I have a huge set of catalogues which have different columns and the different header names for each column, where the description for each header name is given as comments at the beginning of my ascii files in a row. What is the best way to read them into a `pandas.DataFrame` while it can set the name of the column as... | This is a file in Sextractor format. The `astropy.io.ascii` [reader](http://astropy.readthedocs.org/en/stable/io/ascii/index.html) understands this format natively so this is a snap to read:
```
>>> from astropy.io import ascii
>>> dat = ascii.read('table.dat')
>>> dat
<Table masked=False length=3>
MAG_AUTO rh ... |
Why doesn't the MySQLdb Connection context manager close the cursor? | 31,374,857 | 18 | 2015-07-13T03:05:06Z | 31,699,782 | 7 | 2015-07-29T11:57:16Z | [
"python",
"mysql",
"mysql-python",
"contextmanager"
] | MySQLdb `Connections` have a rudimentary context manager that creates a cursor on *enter*, either rolls back or commits on *exit*, and implicitly doesn't suppress exceptions. From the [Connection source](https://github.com/farcepest/MySQLdb1/blob/master/MySQLdb/connections.py):
```
def __enter__(self):
if self.get... | To answer your question directly: I cannot see any harm whatsoever in closing at the end of a `with` block. I cannot say why it is not done in this case. But, as there is a dearth of activity on this question, I had a search through the code history and will throw in a few thoughts (***guesses***) on why the `close()` ... |
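Whatever the historical reason, the usual workaround when you want the cursor closed deterministically is `contextlib.closing`, which guarantees `close()` on block exit. A sketch with a stand-in cursor object (swap in a real MySQLdb cursor in practice):

```python
from contextlib import closing

class FakeCursor:
    """Stand-in for a DB cursor; records whether close() was called."""
    def __init__(self):
        self.closed = False

    def execute(self, sql):
        return sql

    def close(self):
        self.closed = True

cursor = FakeCursor()
with closing(cursor) as cur:
    cur.execute("SELECT 1")
print(cursor.closed)  # True
```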
Return Custom 404 Error when resource not found in Django Rest Framework | 31,380,280 | 7 | 2015-07-13T09:54:21Z | 31,381,075 | 9 | 2015-07-13T10:35:40Z | [
"python",
"django",
"rest",
"django-rest-framework"
] | I am learning [Django Rest Framework](http://www.django-rest-framework.org/), and also new to django. I want to return a custom `404` error in json when a client will access a resource which was not found.
My `urls.py` looks like this:
```
urlpatterns = [
url(r'^mailer/$', views.Mailer.as_view(), name='send-email... | You are looking for [`handler404`](https://docs.djangoproject.com/en/1.8/ref/urls/#handler404).
Here is my suggestion:
1. Create a view that should be called if none of the URL patterns match.
2. Add `handler404 = path.to.your.view` to your root URLconf.
Here is how it's done:
1. `project.views`
```
from dja... |
how to export a table dataframe in pyspark to csv? | 31,385,363 | 6 | 2015-07-13T13:56:14Z | 31,386,290 | 13 | 2015-07-13T14:36:38Z | [
"python",
"sql",
"apache-spark",
"dataframe",
"export-to-csv"
] | I am using spark-1.3.1 (pyspark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it "table") to a csv file so I can manipulate it and plot the columns. How do I export the DataFrame "table" to a csv file?
Thanks! | If data frame fits in a driver memory you can convert [Spark DataFrame](https://github.com/apache/spark/blob/master/python/pyspark/sql/dataframe.py#L42) to local [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) using [`toPandas`](https://github.com/apache/spark/blob/master... |
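Once the data is local (e.g. via `toPandas()`), pandas' `to_csv` does the writing. If instead you have collected the rows into plain Python tuples, the stdlib `csv` module covers the same ground; a generic, Spark-free sketch:

```python
import csv
import io

rows = [('alice', 1), ('bob', 2)]
buffer = io.StringIO()          # swap in open('out.csv', 'w', newline='') for a real file
writer = csv.writer(buffer)
writer.writerow(['name', 'value'])  # header row
writer.writerows(rows)
print(buffer.getvalue())
```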
Right Justify python | 31,389,267 | 4 | 2015-07-13T17:02:11Z | 31,389,332 | 9 | 2015-07-13T17:05:27Z | [
"python",
"python-3.x"
] | How can I right-justify the output of this code?
```
N = int(input())
case = '#'
print(case)
for i in range(N):
case += '#'
print(case)
``` | You can use `format` with `>` to right justify
```
N = 10
for i in range(1, N+1):
print('{:>10}'.format('#'*i))
```
Output
```
#
##
###
####
#####
######
#######
########
#########
##########
```
You can programmatically figure out how far to right-justify using `rju... |
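The same staircase can also be produced with `str.rjust`, which left-pads to a target width and is equivalent to the `'{:>10}'` format spec:

```python
N = 10
lines = [('#' * i).rjust(N) for i in range(1, N + 1)]
print('\n'.join(lines))
```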
using sqlalchemy to load csv file into a database | 31,394,998 | 5 | 2015-07-13T23:09:43Z | 31,397,990 | 9 | 2015-07-14T05:00:57Z | [
"python",
"sqlalchemy"
] | I am trying to learn to program in Python. I would like to load CSV files into a database. Is it a good idea to use SQLAlchemy as the framework to insert the data?
Each file is a database table; some of these files have foreign keys to other CSV files / DB tables.
Thanks! | Because of the power of SQLAlchemy, I'm also using it on a project. Its power comes from the object-oriented way of "talking" to a database instead of hardcoding SQL statements, which can be a pain to manage. Not to mention, it's also a lot faster.
To answer your question bluntly, yes! Storing data from a CSV into a da... |
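SQLAlchemy is a solid choice. If you first want to see the bare mechanics without any ORM, the stdlib `csv` and `sqlite3` modules are enough; this is an illustration of the general CSV-to-table flow, not the SQLAlchemy way itself:

```python
import csv
import io
import sqlite3

# Inline CSV stands in for a file on disk.
csv_text = "id,name\n1,alice\n2,bob\n"

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')

# DictReader yields one mapping per row; named placeholders consume them.
reader = csv.DictReader(io.StringIO(csv_text))
conn.executemany('INSERT INTO users (id, name) VALUES (:id, :name)', reader)
conn.commit()

print(conn.execute('SELECT name FROM users ORDER BY id').fetchall())
# [('alice',), ('bob',)]
```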
how to write to a new cell in python using openpyxl | 31,395,058 | 2 | 2015-07-13T23:15:49Z | 31,395,124 | 8 | 2015-07-13T23:22:52Z | [
"python",
"excel",
"openpyxl"
] | I wrote code which opens an Excel file, iterates through each row, and passes the value to another function.
```
import openpyxl
wb = load_workbook(filename='C:\Users\xxxxx')
for ws in wb.worksheets:
for row in ws.rows:
print row
x1=ucr(row[0].value)
row[1].value=x1 # i am having error at thi... | Try this:
```
import openpyxl
wb = openpyxl.load_workbook(filename='xxxx.xlsx')
ws = wb.worksheets[0]
ws['A1'] = 1
ws.cell(row=2, column=2).value = 2
ws.cell(coordinate="C3").value = 3 # 'coordinate=' is optional here
```
This will set Cells A1, B2 and C3 to 1, 2 and 3 respectively (three different ways of setting cell value... |
Error setting up Vagrant with VirtualBox in PyCharm under OS X 10.10 | 31,395,112 | 21 | 2015-07-13T23:21:17Z | 31,414,015 | 21 | 2015-07-14T18:01:59Z | [
"python",
"osx",
"vagrant",
"virtualbox",
"pycharm"
] | When setting up the remote interpreter and selecting Vagrant, I get the following error in PyCharm:
```
Can't Get Vagrant Settings: [0;31mThe provider 'virtualbox' that was requested to back the machine 'default' is reporting that it isn't usable on this system. The reason is shown bellow: Vagrant could not detect Vir... | Turns out, this problem is a known bug in PyCharm.
Until they fix it, you can get around the problem by launching PyCharm from a terminal window with the `charm` command.
[Vagrant 1.7.3 and VirtualBox 4.3.30 under Pycharm 4.5: Path issue](https://youtrack.jetbrains.com/issue/PY-16441) |
Error setting up Vagrant with VirtualBox in PyCharm under OS X 10.10 | 31,395,112 | 21 | 2015-07-13T23:21:17Z | 32,601,098 | 28 | 2015-09-16T06:20:06Z | [
"python",
"osx",
"vagrant",
"virtualbox",
"pycharm"
] | When setting up the remote interpreter and selecting Vagrant, I get the following error in PyCharm:
```
Can't Get Vagrant Settings: [0;31mThe provider 'virtualbox' that was requested to back the machine 'default' is reporting that it isn't usable on this system. The reason is shown bellow: Vagrant could not detect Vir... | Another workaround:
```
sudo ln -s /usr/local/bin/VBoxManage /usr/bin/VBoxManage
```
Edit:
Since it all worked some time ago, one of the following has to be cause of this problem:
* either update of VirtualBox changed location of it's executable
* or update of PyCharm changed PATH settings / executable location exp... |
Difficulty finding a Python 3.x implementation of the familiar C for-loop | 31,395,587 | 4 | 2015-07-14T00:14:30Z | 31,395,910 | 7 | 2015-07-14T00:55:03Z | [
"python",
"python-3.x"
] | I'm inexperienced in Python and started with Python 3.4.
I read over the Python 3.x documentation on [loop idioms](http://docs.python.org/release/3.4.0/tutorial/datastructures.html#tut-loopidioms), and haven't found a way of constructing a familiar C-family *for-loop*, i.e.
```
for (i = 0; i < n; i++) {
A[i... | ```
for lower <= var < upper:
```
That was [the proposed syntax](https://www.python.org/dev/peps/pep-0284/) for a C-style loop. I say "was the proposed syntax", because PEP 284 was rejected, because:
> Specifically, Guido did not buy the premise that the range() format needed fixing, "The whole point (15 years ago) o... |
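So in practice the idiomatic translations of the C loop are `range` (when you only need the index) and `enumerate` (when you want index and element together):

```python
A = [0] * 5
n = len(A)

# C: for (i = 0; i < n; i++) A[i] = i * i;
for i in range(n):
    A[i] = i * i
print(A)  # [0, 1, 4, 9, 16]

# When you need both index and value, prefer enumerate:
for i, value in enumerate(A):
    print(i, value)
```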
ipython server can't launch: No module named notebook.notebookapp | 31,397,421 | 53 | 2015-07-14T04:02:59Z | 31,426,690 | 18 | 2015-07-15T09:34:36Z | [
"python",
"server",
"ipython"
] | I've been trying to set up an IPython server following several tutorials (since none exactly matched my case). A couple of days ago, I did manage to get it to the point where it was launching, but then I was not able to access it via URL. Today it's not launching anymore, and I can't find much about this specific error I get:
```... | I received the same problem when upgrading IPython. At the moment the answer was written, it was a bug linked to the latest `4` version. If a similar problem occurs for which you wish to switch back to the stable version `3.2.1`:
```
pip uninstall -y IPython
pip install ipython==3.2.1
```
* note: the `-y` option indi... |
ipython server can't launch: No module named notebook.notebookapp | 31,397,421 | 53 | 2015-07-14T04:02:59Z | 32,166,022 | 124 | 2015-08-23T11:15:49Z | [
"python",
"server",
"ipython"
] | I've been trying to set up an IPython server following several tutorials (since none exactly matched my case). A couple of days ago, I did manage to get it to the point where it was launching, but then I was not able to access it via URL. Today it's not launching anymore, and I can't find much about this specific error I get:
```... | This should fix the issue:
```
pip install jupyter
``` |
Scrapyd-deploy command not found after scrapyd installation | 31,398,348 | 5 | 2015-07-14T05:31:36Z | 31,419,370 | 9 | 2015-07-15T00:01:57Z | [
"python",
"web-scraping",
"scrapy",
"twisted",
"scrapyd"
] | I have created a couple of web spiders that I intend to run simultaneously with scrapyd. I first successfully installed scrapyd in Ubuntu 14.04 using the command:
pip install scrapyd, and when I run the command: scrapyd, I get the following output in the terminal:
```
2015-07-14 01:22:02-0400 [-] Log opened.
2015-07-1... | `scrapyd-deploy` is a part of [scrapyd-client](https://github.com/scrapy/scrapyd-client).You can install it from [PyPi](https://pypi.python.org/pypi/scrapyd-client/). Try:
```
$ sudo pip install scrapyd-client
``` |
Column filtering in PySpark | 31,400,143 | 5 | 2015-07-14T07:19:51Z | 31,403,594 | 12 | 2015-07-14T10:05:51Z | [
"python",
"lambda",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a dataframe `df` loaded from Hive table and it has a timestamp column, say `ts`, with string type of format `dd-MMM-yy hh.mm.ss.MS a` (converted to python datetime library, this is `%d-%b-%y %I.%M.%S.%f %p`).
Now I want to filter rows from the dataframe that are from the last five minutes:
```
only_last_5_minu... | It is possible to use a user-defined function.
```
from datetime import datetime, timedelta
from pyspark.sql.types import BooleanType, TimestampType
from pyspark.sql.functions import udf, col
def in_last_5_minutes(now):
def _in_last_5_minutes(then):
then_parsed = datetime.strptime(then, '%d-%b-%y %I.%M.%S.%... |
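The parsing half of that UDF can be exercised without Spark at all; a plain-`datetime` sketch of the five-minute window check, reusing the format string from the question (the sample timestamps are made up):

```python
from datetime import datetime, timedelta

FMT = '%d-%b-%y %I.%M.%S.%f %p'  # format string from the question

def within_last_5_minutes(ts_string, now):
    then = datetime.strptime(ts_string, FMT)
    return now - timedelta(minutes=5) <= then <= now

now = datetime(2015, 7, 14, 10, 0, 0)
recent = within_last_5_minutes('14-Jul-15 09.58.30.000000 AM', now)
stale = within_last_5_minutes('14-Jul-15 09.00.00.000000 AM', now)
print(recent, stale)  # True False
```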
How can this code print Hello World without any print statement | 31,400,338 | 7 | 2015-07-14T07:29:18Z | 31,400,518 | 14 | 2015-07-14T07:38:43Z | [
"python"
] | I found this code in Python, which prints "Hello World" without using the string "Hello World". It's a single line of code, one expression (i.e. no print statement).
```
(lambda _, __, ___, ____, _____, ______, _______, ________: getattr(__import__(True.__class__.__name__[_] + [].__class__.__name__[__]), ().__clas... | The answer to the question as written: The code avoids a `print` statement by `os.write()`ing to `stdout`'s file descriptor, which is `1`:
```
getattr(__import__("os"), "write")(1, "Hello world!\n")
```
The rest of the explanation is detailed at <https://benkurtovic.com/2014/06/01/obfuscating-hello-world.html>. Inste... |
bounding box of numpy array | 31,400,769 | 7 | 2015-07-14T07:49:45Z | 31,402,351 | 8 | 2015-07-14T09:08:12Z | [
"python",
"arrays",
"numpy",
"transformation"
] | Suppose you have a 2D numpy array with some random values and surrounding zeros.
Example "tilted rectangle":
```
import numpy as np
from skimage import transform
img1 = np.zeros((100,100))
img1[25:75,25:75] = 1.
img2 = transform.rotate(img1, 45)
```
Now I want to find the smallest bounding rectangle for all the non... | You can roughly halve the execution time by using `np.any` to reduce the rows and columns that contain non-zero values to 1D vectors, rather than finding the indices of all non-zero values using `np.where`:
```
def bbox1(img):
a = np.where(img != 0)
bbox = np.min(a[0]), np.max(a[0]), np.min(a[1]), np.max(a[1])... |
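To make the `np.any`-style reduction concrete, here is the same idea without NumPy (rows and columns are collapsed to booleans first, then the first and last hits are taken); a plain-Python sketch for small grids:

```python
def bbox(img):
    """Return (rmin, rmax, cmin, cmax) of the non-zero region."""
    row_has = [any(v != 0 for v in row) for row in img]
    col_has = [any(row[j] != 0 for row in img) for j in range(len(img[0]))]
    rows = [i for i, hit in enumerate(row_has) if hit]
    cols = [j for j, hit in enumerate(col_has) if hit]
    return rows[0], rows[-1], cols[0], cols[-1]

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(bbox(grid))  # (1, 2, 1, 2)
```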
How to remove parentheses only around single words in a string | 31,405,409 | 7 | 2015-07-14T11:32:43Z | 31,405,452 | 15 | 2015-07-14T11:34:49Z | [
"python",
"regex"
] | Let's say I have a string like this:
```
s = '((Xyz_lk) some stuff (XYZ_l)) (and even more stuff (XyZ))'
```
I would like to remove the parentheses only around single words so that I obtain:
```
'(Xyz_lk some stuff XYZ_l) (and even more stuff XyZ)'
```
How would I do this in Python? So far I only managed to remove ... | ```
re.sub(r'\((\w+)\)',r'\1',s)
```
Use `\1` or backreferencing. |
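Checking the one-liner against the example string from the question (runnable as-is):

```python
import re

s = '((Xyz_lk) some stuff (XYZ_l)) (and even more stuff (XyZ))'
# \((\w+)\) matches parentheses wrapping a single word; \1 keeps the word.
result = re.sub(r'\((\w+)\)', r'\1', s)
print(result)  # (Xyz_lk some stuff XYZ_l) (and even more stuff XyZ)
```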
Determining implementation of Python at runtime? | 31,407,123 | 11 | 2015-07-14T12:51:40Z | 31,407,159 | 13 | 2015-07-14T12:52:57Z | [
"python"
] | I'm writing a piece of code that returns profiling information and it would be helpful to be able to dynamically return the implementation of Python in use.
Is there a Pythonic way to determine which implementation (e.g. Jython, PyPy) of Python my code is executing on at runtime? I know that I am able to get version i... | You can use `python_implementation` from the `platform` module in [Python 3](https://docs.python.org/3/library/platform.html#platform.python_implementation) or [Python 2](https://docs.python.org/2/library/platform.html#platform.python_implementation). This returns a string that identifies the Python implementation.
e.... |
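For example (the exact string returned depends on the interpreter running the code):

```python
import platform

impl = platform.python_implementation()
print(impl)  # e.g. 'CPython', 'PyPy', 'Jython' or 'IronPython'
```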
Hiding lines after showing a pyplot figure | 31,410,043 | 3 | 2015-07-14T14:50:50Z | 31,417,070 | 7 | 2015-07-14T20:50:16Z | [
"python",
"matplotlib"
] | I'm using pyplot to display a line graph of up to 30 lines. I would like to add a way to quickly show and hide individual lines on the graph. Pyplot does have a menu where you can edit line properties to change the color or style, but its rather clunky when you want to hide lines to isolate the one you're interested in... | If you'd like, you can hook up a callback to the legend that will show/hide lines when they're clicked. There's a simple example here: <http://matplotlib.org/examples/event_handling/legend_picking.html>
Here's a "fancier" example that should work without needing to manually specify the relationship of the lines and le... |
Python: Why is popping off a queue faster than for-in block? | 31,414,011 | 9 | 2015-07-14T18:01:53Z | 31,414,080 | 12 | 2015-07-14T18:05:56Z | [
"python",
"for-loop",
"optimization",
"while-loop"
] | I have been working on a python script to analyze CSVs. Some of these files are fairly large (1-2 million records), and the script was taking hours to complete.
I changed the way the records are processed from a `for-in` loop to a `while` loop, and the speedup was remarkable. Demonstration below:
```
>>> def for_list... | `while_list` is mutating the global `data`. `timeit.timeit` does not reset the value of `data`. `timeit.timeit` calls `for_list` and `while_list` a million times each by default. After the first call to `while_list`, subsequent calls to `while_list` return after performing 0 loops because `data` is already empty.
You ... |
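A minimal illustration of the pitfall and the repair: give every timed run its own copy of the data, so the destructive version cannot coast on an already-empty list (the helper below is a stand-in for the original `while_list`):

```python
import timeit

def drain(items):
    # Sums by destructively popping; mutates its argument.
    total = 0
    while items:
        total += items.pop()
    return total

data = list(range(100))

# Fair timing: every run drains its own fresh copy of the data.
fair = timeit.timeit(lambda: drain(list(data)), number=1000)
assert data == list(range(100))  # the source list is untouched

# Buggy timing: run 1 empties `data`, so runs 2..1000 measure a no-op.
# This is exactly why the while-loop version looked impossibly fast.
buggy = timeit.timeit(lambda: drain(data), number=1000)
print(len(data))  # 0
```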
How to prepend a path to sys.path in Python? | 31,414,041 | 17 | 2015-07-14T18:03:05Z | 31,580,183 | 7 | 2015-07-23T06:55:16Z | [
"python",
"ubuntu",
"pip",
"easy-install",
"pythonpath"
] | **Problem description:**
Using pip, I upgraded to the latest version of [requests](http://docs.python-requests.org/en/latest/) (version 2.7.0, with `pip show requests` giving the location `/usr/local/lib/python2.7/dist-packages`). When I `import requests` and print `requests.__version__` in the interactive command lin... | You shouldn't need to mess with pip's path, python actually handles it's pathing automatically in my experience. It appears you have two pythons installed. If you type:
```
which pip
which python
```
what paths do you see? If they're not in the same /bin folder, then that's your problem. I'm guessing that the python ... |
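To answer the literal question in the title: the runtime fix is `sys.path.insert(0, ...)`, which makes a directory win over everything after it (per-process only, and the path below is purely illustrative):

```python
import sys

# Hypothetical directory you want searched first; adjust to your setup.
preferred = '/usr/local/lib/python2.7/dist-packages'

if preferred in sys.path:
    sys.path.remove(preferred)  # avoid a duplicate entry
sys.path.insert(0, preferred)   # position 0 is searched before later entries

print(sys.path[0])
```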
ImportError: cannot import name wraps | 31,417,964 | 19 | 2015-07-14T21:46:55Z | 31,419,279 | 20 | 2015-07-14T23:50:22Z | [
"python",
"mocking",
"pyunit"
] | I'm using python 2.7.6 on Ubuntu 14.04.2 LTS. I'm using mock to mock some unittests and noticing when I import mock it fails importing wraps.
I'm not sure if there's a different version of mock or six I should be using for its import to work. I couldn't find any relevant answers, and I'm not using virtual environments.
moc... | Installed mock==1.0.1 and that worked for some reason. (shrugs)
edit: The real fix for me was to **updated setuptools** to the latest and it allowed me to upgrade mock and six to the latest. I was on setuptools 3.3. In my case I also had to remove said modules by hand because they were owned by OS in '/usr/local/lib/p... |
ImportError: cannot import name wraps | 31,417,964 | 19 | 2015-07-14T21:46:55Z | 31,739,766 | 15 | 2015-07-31T06:51:28Z | [
"python",
"mocking",
"pyunit"
] | I'm using python 2.7.6 on Ubuntu 14.04.2 LTS. I'm using mock to mock some unittests and noticing when I import mock it fails importing wraps.
I'm not sure if there's a different version of mock or six I should be using for its import to work. I couldn't find any relevant answers, and I'm not using virtual environments.
moc... | I encountered the same issue on my mac, which I was able to fix by realizing that my python's sys.path contained both
```
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/
```
and
```
/Library/Python/2.7/site-packages/
```
with the former earlier than the latter.
You can test if this is h... |
Python operator precedence - and vs greater than | 31,421,316 | 6 | 2015-07-15T04:06:54Z | 31,421,382 | 7 | 2015-07-15T04:13:31Z | [
"python",
"operator-precedence"
] | I have a line of code in my script that has both these operators chained together. From the documentation reference **BOOLEAN AND** has a lower precedence than **COMPARISON GREATER THAN**. I am getting unexpected results here in this code:
```
>>> def test(msg, value):
... print(msg)
... return value
>>> test... | Because you are looking at the wrong thing. A `call` (function call) takes higher precedence than both `and` and `>` (greater than). So function calls occur first, from left to right.
Python will get the results of all function calls before either comparison happens. The only thing that takes precedence ove...
Why does "not(True) in [False, True]" return False? | 31,421,379 | 418 | 2015-07-15T04:12:58Z | 31,421,407 | 34 | 2015-07-15T04:17:14Z | [
"python",
"python-2.7",
"python-3.x"
] | If I do this:
```
>>> False in [False, True]
True
```
That returns `True`. Simply because `False` is in the list.
But if I do:
```
>>> not(True) in [False, True]
False
```
That returns `False`. Whereas `not(True)` is equal to `False`:
```
>>> not(True)
False
```
Why? | Operator precedence. `in` binds more tightly than `not`, so your expression is equivalent to `not((True) in [False, True])`. |
Why does "not(True) in [False, True]" return False? | 31,421,379 | 418 | 2015-07-15T04:12:58Z | 31,421,410 | 33 | 2015-07-15T04:17:25Z | [
"python",
"python-2.7",
"python-3.x"
] | If I do this:
```
>>> False in [False, True]
True
```
That returns `True`. Simply because `False` is in the list.
But if I do:
```
>>> not(True) in [False, True]
False
```
That returns `False`. Whereas `not(True)` is equal to `False`:
```
>>> not(True)
False
```
Why? | It's all about [operator precedence](https://docs.python.org/2/reference/expressions.html) (`in` is stronger than `not`). But it can be easily corrected by adding parentheses at the right place:
```
(not(True)) in [False, True] # prints true
```
writing:
```
not(True) in [False, True]
```
is the same like:
```
no... |
Why does "not(True) in [False, True]" return False? | 31,421,379 | 418 | 2015-07-15T04:12:58Z | 31,421,411 | 670 | 2015-07-15T04:17:28Z | [
"python",
"python-2.7",
"python-3.x"
] | If I do this:
```
>>> False in [False, True]
True
```
That returns `True`. Simply because `False` is in the list.
But if I do:
```
>>> not(True) in [False, True]
False
```
That returns `False`. Whereas `not(True)` is equal to `False`:
```
>>> not(True)
False
```
Why? | **Operator precedence** [2.x](https://docs.python.org/2/reference/expressions.html#operator-precedence), [3.x](https://docs.python.org/3/reference/expressions.html#operator-precedence). The precedence of `not` is lower than that of `in`. So it is equivalent to:
```
>>> not (True in [False, True])
False
```
This is wh... |
Why does "not(True) in [False, True]" return False? | 31,421,379 | 418 | 2015-07-15T04:12:58Z | 31,421,418 | 14 | 2015-07-15T04:18:32Z | [
"python",
"python-2.7",
"python-3.x"
] | If I do this:
```
>>> False in [False, True]
True
```
That returns `True`. Simply because `False` is in the list.
But if I do:
```
>>> not(True) in [False, True]
False
```
That returns `False`. Whereas `not(True)` is equal to `False`:
```
>>> not(True)
False
```
Why? | It is evaluating as `not True in [False, True]`, which returns `False` because `True` is in `[False, True]`
If you try
```
>>>(not(True)) in [False, True]
True
```
You get the expected result. |
Why does "not(True) in [False, True]" return False? | 31,421,379 | 418 | 2015-07-15T04:12:58Z | 31,421,636 | 67 | 2015-07-15T04:39:48Z | [
"python",
"python-2.7",
"python-3.x"
] | If I do this:
```
>>> False in [False, True]
True
```
That returns `True`. Simply because `False` is in the list.
But if I do:
```
>>> not(True) in [False, True]
False
```
That returns `False`. Whereas `not(True)` is equal to `False`:
```
>>> not(True)
False
```
Why? | **`not x in y`** is evaluated as **`x not in y`**
You can see exactly what's happening by disassembling the code. The first case works as you expect:
```
>>> x = lambda: False in [False, True]
>>> dis.dis(x)
1 0 LOAD_GLOBAL 0 (False)
3 LOAD_GLOBAL 0 (False)
... |
Why does "not(True) in [False, True]" return False? | 31,421,379 | 418 | 2015-07-15T04:12:58Z | 31,458,009 | 12 | 2015-07-16T15:07:36Z | [
"python",
"python-2.7",
"python-3.x"
] | If I do this:
```
>>> False in [False, True]
True
```
That returns `True`. Simply because `False` is in the list.
But if I do:
```
>>> not(True) in [False, True]
False
```
That returns `False`. Whereas `not(True)` is equal to `False`:
```
>>> not(True)
False
```
Why? | Alongside the other answers, which mentioned that the precedence of `not` is lower than that of `in`, your statement is actually equivalent to:
```
not (True in [False, True])
```
But note that if you don't separate your condition from the others, Python will use two rules, `precedence` and `chaining`, to parse it, and in this case pyth...
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn? | 31,421,413 | 33 | 2015-07-15T04:17:36Z | 31,558,398 | 9 | 2015-07-22T08:54:02Z | [
"python",
"machine-learning",
"nlp",
"artificial-intelligence",
"scikit-learn"
] | I'm working on a sentiment analysis problem; the data looks like this:
```
label instances
5 1190
4 838
3 239
1 204
2 127
```
So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/module... | First of all, it's a little bit harder to tell whether your data is unbalanced using counting analysis alone. For example: is 1 positive observation in 1,000 just noise, an error, or a breakthrough in science? You never know.
So it's always better to use all your available knowledge and choose its status wisely.
... |
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn? | 31,421,413 | 33 | 2015-07-15T04:17:36Z | 31,570,518 | 12 | 2015-07-22T17:53:38Z | [
"python",
"machine-learning",
"nlp",
"artificial-intelligence",
"scikit-learn"
] | I'm working on a sentiment analysis problem; the data looks like this:
```
label instances
5 1190
4 838
3 239
1 204
2 127
```
So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/module... | **Posed question**
Responding to the question 'what metric should be used for multi-class classification with imbalanced data': Macro-F1-measure.
Macro Precision and Macro Recall can also be used, but they are not as easily interpretable as in binary classification; they are already incorporated into the F-measure, and ex...
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn? | 31,421,413 | 33 | 2015-07-15T04:17:36Z | 31,575,870 | 39 | 2015-07-22T23:44:13Z | [
"python",
"machine-learning",
"nlp",
"artificial-intelligence",
"scikit-learn"
] | I'm working on a sentiment analysis problem; the data looks like this:
```
label instances
5 1190
4 838
3 239
1 204
2 127
```
So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/module... | I think there is a lot of confusion about which weights are used for what. I am not sure I know precisely what bothers you, so I am going to cover different topics; bear with me ;).
## Class weights
The weights from the `class_weight` parameter are used to **train the classifier**.
They **are not used in the calculati... |
How to compute precision, recall, accuracy and f1-score for the multiclass case with scikit learn? | 31,421,413 | 33 | 2015-07-15T04:17:36Z | 31,587,532 | 14 | 2015-07-23T12:35:37Z | [
"python",
"machine-learning",
"nlp",
"artificial-intelligence",
"scikit-learn"
] | I'm working on a sentiment analysis problem; the data looks like this:
```
label instances
5 1190
4 838
3 239
1 204
2 127
```
So my data is unbalanced, since 1190 `instances` are labeled with `5`. For the classification I'm using scikit's [SVC](http://scikit-learn.org/stable/module... | A lot of very detailed answers here, but I don't think you are answering the right questions. As I understand the question, there are two concerns:
1. How do I score a multiclass problem?
2. How do I deal with unbalanced data?
## 1.
You can use most of the scoring functions in scikit-learn with both multiclass problem ... |
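To make the macro averaging concrete without scikit-learn, here is a small hand-rolled computation of per-class precision/recall/F1 and their unweighted (macro) mean; a teaching sketch only, since in practice `sklearn.metrics` with `average='macro'` does this for you:

```python
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    # Macro average: every class counts equally, however rare it is.
    return sum(f1_scores) / len(f1_scores)

y_true = [1, 1, 1, 2, 2, 3]
y_pred = [1, 1, 2, 2, 2, 3]
print(round(macro_f1(y_true, y_pred), 3))  # 0.867
```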
Dictionary comprehension with inline functions | 31,421,638 | 10 | 2015-07-15T04:40:04Z | 31,421,720 | 12 | 2015-07-15T04:48:08Z | [
"python",
"dictionary",
"lambda"
] | I need to store functions in a dictionary, each function depending on its key; let's say, for key `1`, the associated lambda function is `lambda s: s * A[1]`. I tried a dict comprehension, but it seems that the inline functions end up bound to the last value of the loop.
```
d = {k, lambda s: s * A[k] for k in ran... | A way to fix it is to change the code to:
```
d = {k: lambda s, k=k: s * A[k] for k in range(n)}
```
Without the binding, Python looks up the "current" `k` when each lambda is called, which is always `n-1` in the original code. |
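A minimal demonstration of why the default-argument binding matters (late binding means every closure sees the final `k`; the sample values in `A` are made up):

```python
A = [10, 20, 30]

late = {k: (lambda s: s * A[k]) for k in range(3)}
bound = {k: (lambda s, k=k: s * A[k]) for k in range(3)}

print(late[0](1), late[1](1))    # 30 30: every lambda sees k == 2
print(bound[0](1), bound[1](1))  # 10 20: k captured at definition time
```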
Python in vs ==. Which to Use in this case? | 31,422,253 | 7 | 2015-07-15T05:38:36Z | 31,422,431 | 12 | 2015-07-15T05:53:07Z | [
"python"
] | I am making an **AJAX** call and passing variable `pub` in it which could be `1` or `0`.
As a beginner, I want to be doubly sure of the variable type that is coming in. I am aware I can easily convert with `int()`; the problem is actually not with the AJAX result, but it led to this question.
My code:
```
if pub == 1 or ... | **Performance**: in is better
```
timeit.timeit("pub='1'; pub == 1 or pub == '1'")
0.07568907737731934
timeit.timeit("pub='1'; pub in[1, '1']")
0.04272890090942383
timeit.timeit("pub=1; pub == 1 or pub == '1'")
0.07502007484436035
timeit.timeit("pub=1; pub in[1, '1']")
0.07035684585571289
#other options
timeit.timeit... |
Finding the minimum length of multiple lists | 31,425,528 | 3 | 2015-07-15T08:42:33Z | 31,425,550 | 10 | 2015-07-15T08:43:34Z | [
"python",
"list",
"min"
] | I have three lists of different lengths.
For example
```
List1 is of length 40
List2 is of length 42
List3 is of length 47
```
How can I use the Python inbuilt `min()` or any other method to find the list with the minimum length?
I tried:
```
min(len([List1,List2,List3]))
```
but I get `TypeError: 'int' object i... | You need to apply `len()` to each list separately:
```
shortest_length = min(len(List1), len(List2), len(List3))
```
If you already have a sequence of the lists, you could use the [`map()` function](https://docs.python.org/2/library/functions.html#map) or a [generator expression](https://docs.python.org/2/tutorial/cl... |
Can I make an O(1) search algorithm using a sorted array with a known step? | 31,431,866 | 5 | 2015-07-15T13:32:13Z | 31,432,453 | 9 | 2015-07-15T13:56:17Z | [
"python",
"algorithm",
"python-2.7",
"matplotlib"
] | ## Background
My software visualizes *very* large datasets; the data is so large that I can't store it all in RAM at any one time, so it has to be loaded in a paged fashion. I embed `matplotlib` functionality for displaying and manipulating the plot in the backend of my application.
These datasets contain t... | The algorithm you suggest seems reasonable and looks like it would work.
As has become clear in your comments, the problem with it is the coarseness at which your time was recorded. (This can be common when unsynchronized data is recorded -- ie, the data generation clock, eg, frame rate, is not synced with the computer).
T... |
Force django-admin startproject if project folder already exists | 31,431,924 | 5 | 2015-07-15T13:34:45Z | 31,432,119 | 22 | 2015-07-15T13:41:50Z | [
"python",
"django",
"django-admin"
] | I want to start a new Django project in an already existing folder, and obviously I get
```
CommandError: '/home/user/projectfolder' already exists.
```
Is there some way to force the startproject command to create the project in an existing folder? I have some important data in that folder, and also a git folder, so I don't want to mov... | Just use the current directory:
`cd /home/user/projectfolder`
`django-admin.py startproject project .`
The use of `.` just instructs Django to create a project in the current directory while:
`django-admin.py startproject`
instructs Django to create a project and create the necessary directory
> If only the proje... |
How to find median using Spark | 31,432,843 | 9 | 2015-07-15T14:11:39Z | 31,437,177 | 15 | 2015-07-15T17:30:31Z | [
"python",
"apache-spark",
"median",
"rdd",
"pyspark"
] | How can I find the median of an RDD of integers using a distributed method, IPython, and Spark? The RDD has approximately 700,000 elements and is therefore too large to collect for finding the median.
This question is similar to the one linked below; however, that answer uses Scala, which I do not know.
[How can I ca... | ### Spark 2.0+:
You can use `approxQuantile` method which implements [Greenwald-Khanna algorithm](http://infolab.stanford.edu/~datar/courses/cs361a/papers/quantiles.pdf):
**Python**:
```
df.approxQuantile("x", [0.5], 0.25)
```
**Scala**:
```
df.stat.approxQuantile("x", Array(0.5), 0.25)
```
where the last paramet... |
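`approxQuantile` needs a running Spark session, but as a point of reference, the exact median it approximates can be computed locally with the standard library on a small sample:

```python
import statistics

# Exact median of a small local sample; the 0.5 quantile returned by
# approxQuantile converges to this as relativeError approaches 0.
values = [3, 1, 4, 1, 5, 9, 2, 6]
print(statistics.median(values))  # 3.5 (mean of the two middle values)
```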
List of sets, set.add() is adding to all sets in the list | 31,440,056 | 2 | 2015-07-15T20:08:49Z | 31,440,120 | 8 | 2015-07-15T20:12:13Z | [
"python",
"list",
"set"
] | I'm trying to iterate through a spreadsheet and make a set for each of its columns, adding the values to their respective sets.
```
storage = [ set() ]*35 #there's 35 columns in the excel sheet
for line in in_file: #iterate through all the lines in the file
t = line.split('\t') #split the line by all the t... | ```
storage = [set()] * 35
```
This creates a list with the **same set** listed 35 times. To create a list with 35 different sets, use:
```
storage = [set() for i in range(35)]
```
This second form ensures `set()` is called multiple times. The first form only calls it once and then duplicates that single object refe... |
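A small demonstration of the aliasing pitfall described in the answer:

```python
# `[set()] * 3` repeats one set object three times...
aliased = [set()] * 3
aliased[0].add('x')
print(aliased)  # [{'x'}, {'x'}, {'x'}]  -- one set, three references

# ...while a comprehension calls set() once per element.
independent = [set() for _ in range(3)]
independent[0].add('x')
print(independent)  # [{'x'}, set(), set()]

print(aliased[0] is aliased[1])          # True
print(independent[0] is independent[1])  # False
```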
Requests works and URLFetch doesn't | 31,441,350 | 2 | 2015-07-15T21:20:29Z | 31,442,489 | 7 | 2015-07-15T22:41:43Z | [
"python",
"google-app-engine",
"python-requests",
"urlfetch"
] | I'm trying to make a request to the Particle servers in Python in a Google App Engine app.
In my terminal, I can complete the request simply and successfully with requests as:
```
res = requests.get('https://api.particle.io/v1/devices', params={"access_token": {ACCESS_TOKEN}})
```
But in my app, the same thing doesn... | In a nutshell, your problem is that in your `urlfetch` sample you're embedding your access token in the request body, and since you're issuing a GET request, which cannot carry a request body, this information gets discarded.
**Why does your first snippet work?**
Because `requests.get()` takes that opti... |
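Since a GET request discards any payload, the token has to travel in the URL's query string. A sketch with the standard library (the token value here is a placeholder):

```python
from urllib.parse import urlencode

# Build the query string by hand, as `requests` does internally for its
# `params` argument. 'MY_TOKEN' is a placeholder value.
base = 'https://api.particle.io/v1/devices'
url = base + '?' + urlencode({'access_token': 'MY_TOKEN'})
print(url)  # https://api.particle.io/v1/devices?access_token=MY_TOKEN
```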
How can I make flycheck use virtualenv | 31,443,527 | 3 | 2015-07-16T00:35:23Z | 31,456,619 | 7 | 2015-07-16T14:08:45Z | [
"python",
"emacs",
"virtualenv",
"flycheck"
] | I have just happily configured Emacs with autocompletion via jedi, syntax checking via flycheck, and virtualenvs created within bootstrap. It all seems to work.
I'd like to add the ability to use **flycheck-pylint** (to get errors in imports) but I'm not able to make it work. Even if I change the virtualenv by hand (M-x... | Thanks to an answer from [Lunaryorn on github](https://github.com/flycheck/flycheck/issues/692), I realized there is also a flycheck-set-pylint-executable. Now all is working correctly with the following configuration:
```
(defun set-flychecker-executables ()
"Configure virtualenv for flake8 and lint."
(when (get-... |
RuntimeError: working outside of application context | 31,444,036 | 7 | 2015-07-16T01:41:08Z | 31,444,175 | 14 | 2015-07-16T01:59:04Z | [
"python",
"mysql",
"flask",
"werkzeug",
"flask-restful"
] | **app.py**
```
from flask import Flask, render_template, request,jsonify,json,g
import mysql.connector
app = Flask(__name__)
**class TestMySQL():**
@app.before_request
def before_request():
try:
g.db = mysql.connector.connect(user='root', password='root', database='mysql')
except mysql.connector.er... | Flask has an [Application Context](http://flask.pocoo.org/docs/0.10/appcontext/#creating-an-application-context), and it seems like you'll need to do something like:
```
def test_connection(self):
with app.app_context():
#test code
```
You can probably also shove the `app.app_context()` call into a test s... |
Python cassandra driver: Invalid or unsupported protocol version: 4 | 31,444,098 | 5 | 2015-07-16T01:49:39Z | 31,462,161 | 12 | 2015-07-16T18:45:53Z | [
"python",
"amazon-web-services",
"cassandra"
] | I get the following error:
```
File "clear-domain-cass.py", line 25, in <module>
session = cluster.connect('my_domain')
File "/usr/lib/python2.6/dist-packages/cassandra/cluster.py", line 839, in connect
self.control_connection.connect()
File "/usr/lib/python2.6/dist-packages/cassandra/cluster.py", line ... | The version of the python driver you're using attempts to use the v4 native protocol by default, but Cassandra 2.1 only supports protocol versions 3 and lower. To tell the driver to use the v3 protocol, do the following:
```
cluster = Cluster(contact_points=[hostIp], protocol_version=3)
```
(By the way, the error mes... |
What does the -> (dash-greater-than arrow symbol) mean in a Python method signature? | 31,445,728 | 12 | 2015-07-16T05:07:59Z | 31,445,907 | 12 | 2015-07-16T05:25:42Z | [
"python",
"python-3.x"
] | There is a `->`, or dash-greater-than symbol, at the end of a Python method, and I'm not sure what it means. One might call it an arrow as well.
Here is the example:
```
@property
def get_foo(self) -> Foo:
return self._foo
```
where `self._foo` is an instance of Foo.
My guess is that it is some kind of static ty... | This is a [function annotation](https://www.python.org/dev/peps/pep-3107/). It can be used to attach additional information to the [arguments](https://www.python.org/dev/peps/pep-3107/#id31) or the [return value](https://www.python.org/dev/peps/pep-3107/#id32) of a function. It is a useful way to say how a function must be...
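A minimal sketch (with a made-up function name) showing that annotations are stored as metadata on the function object and are not enforced at runtime:

```python
# `make_label` is a hypothetical example, not from the question.
def make_label(x: int) -> str:
    return str(x)

# Annotations live in __annotations__; Python does not check them.
print(make_label.__annotations__)  # {'x': <class 'int'>, 'return': <class 'str'>}
print(make_label(3))               # 3
```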
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? | 31,447,694 | 98 | 2015-07-16T07:18:19Z | 31,448,362 | 16 | 2015-07-16T07:52:06Z | [
"python",
"python-3.x"
] | Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? Is there a good reason? This inconsistency baffles me. (And we're talking about Python 3, which purposely broke backward compatibility in order to achieve goals like consistency.)
For example:
```
>>> from datetime import time
>>> ... | It's a special case (`"0"+`)
# [2.4.4. Integer literals](https://docs.python.org/3/reference/lexical_analysis.html#integer-literals)
```
Integer literals are described by the following lexical definitions:
integer ::= decimalinteger | octinteger | hexinteger | bininteger
decimalinteger ::= nonzerodigit digi... |
Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? | 31,447,694 | 98 | 2015-07-16T07:18:19Z | 31,448,530 | 96 | 2015-07-16T08:01:29Z | [
"python",
"python-3.x"
] | Why does Python 3 allow "00" as a literal for 0 but not allow "01" as a literal for 1? Is there a good reason? This inconsistency baffles me. (And we're talking about Python 3, which purposely broke backward compatibility in order to achieve goals like consistency.)
For example:
```
>>> from datetime import time
>>> ... | Per <https://docs.python.org/3/reference/lexical_analysis.html#integer-literals>:
> Integer literals are described by the following lexical definitions:
>
> ```
> integer ::= decimalinteger | octinteger | hexinteger | bininteger
> decimalinteger ::= nonzerodigit digit* | "0"+
> nonzerodigit ::= "1"..."9"
>... |
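The `"0"+` special case is easy to verify interactively: a run of zeros compiles, while a leading zero before a nonzero digit is a `SyntaxError`:

```python
# "000" matches the "0"+ branch of decimalinteger, so it is legal.
print(eval("000"))  # 0

# "01" matches neither branch, so compilation fails.
try:
    compile("01", "<string>", "eval")
    raised = False
except SyntaxError:
    raised = True
print(raised)  # True
```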
To check whether a number is a multiple of a second number | 31,449,216 | 2 | 2015-07-16T08:36:09Z | 31,449,252 | 8 | 2015-07-16T08:37:44Z | [
"python",
"numbers"
] | I want to check whether a number is a multiple of a second number. What's wrong with the following code?
```
def is_multiple(x,y):
if x!=0 & (y%x)==0 :
print("true")
else:
print("false")
end
print("A program in python")
x=input("enter a number :")
y=input("enter its multiple :")
is_multiple(x,y)
```
er... | You are using the *bitwise AND operator* `&`; you want the *boolean AND operator* here, `and`:
```
x and (y % x) == 0
```
Next, you want to get your inputs converted to integers:
```
x = int(input("enter a number :"))
y = int(input("enter its multiple :"))
```
You'll get a `NameError` for that `end` expression on a ... |
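Putting the answer's fixes together, a corrected version of the question's function might look like this (returning a boolean instead of printing is an assumption on my part):

```python
def is_multiple(x, y):
    # Boolean `and` short-circuits, so y % x is never evaluated when x == 0.
    return x != 0 and y % x == 0

print(is_multiple(3, 9))   # True
print(is_multiple(3, 10))  # False
print(is_multiple(0, 5))   # False (no ZeroDivisionError)
```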
C++ program taking minutes to parse large file whereas python is running in a few seconds | 31,456,277 | 2 | 2015-07-16T13:54:32Z | 31,457,227 | 8 | 2015-07-16T14:33:43Z | [
"python",
"c++",
"regex"
] | I am running a C++ program in VS. I provided a regex, and I am parsing a file that is over 2 million lines long for strings that match that regex. Here is the code:
```
int main() {
ifstream myfile("file.log");
if (myfile.is_open())
{
int order_count = 0;
regex pat(R"(.*(SOME)(\s)*(TEXT).*)... | You should be using `regex_match`, not `regex_search`.
> ### [7.2.5.3. search() vs. match()](https://docs.python.org/2/library/re.html#search-vs-match)
>
> Python offers two different primitive operations based on regular expressions: re.match() checks for a match only at the beginning of the string, while re.search()... |
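The Python analogue of the distinction quoted above:

```python
import re

# match() only succeeds at the start of the string; search() scans it.
print(re.match(r'TEXT', 'SOME TEXT'))           # None
print(re.search(r'TEXT', 'SOME TEXT') is None)  # False -- it finds a match
```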
How do I protect urls I use internally for push queues in Google App Engine? | 31,456,321 | 2 | 2015-07-16T13:56:31Z | 31,456,488 | 7 | 2015-07-16T14:03:32Z | [
"python",
"security",
"google-app-engine"
] | I'm running Flask on GAE, and I'm working on implementing a push queue to run tasks for me in the background. Because GAE's push queues work by scheduling and sending HTTP requests to my Flask server, I'm concerned about my users guessing the URLs I designated for internal use with my push queue. I thought about having... | You can protect your task URLs by configuring them in app.yaml to use admin login:
```
- url: /worker
......
login: admin
``` |
Can't instantiate abstract class ... with abstract methods | 31,457,855 | 4 | 2015-07-16T15:00:35Z | 31,458,576 | 7 | 2015-07-16T15:32:48Z | [
"python",
"abstract-class",
"abc",
"six"
] | I'm working on a kind of library, and for a weird reason I get this error.
* [Here](https://github.com/josuebrunel/yahoo-fantasy-sport/blob/development/fantasy_sport/roster.py) is my code. Of course, *the @abc.abstractmethod decorators have to be uncommented*
* [Here](https://github.com/josuebrunel/yahoo-fantasy-sport/blob/development/t... | Your issue comes from the fact that you have defined the abstract methods in your base abstract class with `__` (double underscore) prepended. This causes Python to do [name mangling](https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references) at the time the classes are defined.
The names ... |
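Name mangling can be observed directly; a minimal sketch with a made-up class:

```python
class Base:
    # Two leading underscores trigger mangling to _Base__hidden.
    def __hidden(self):
        return 'base'

b = Base()
print('_Base__hidden' in dir(b))  # True
print(b._Base__hidden())          # base
```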
Why does my Sieve of Eratosthenes work faster with integers than with booleans? | 31,459,623 | 13 | 2015-07-16T16:23:41Z | 31,459,730 | 13 | 2015-07-16T16:29:21Z | [
"python",
"performance",
"python-2.7",
"boolean",
"cpython"
] | I wrote a simple Sieve of Eratosthenes, which uses a list of ones and turns them into zeros if not prime, like so:
```
def eSieve(n): #Where m is fixed-length list of all integers up to n
'''Creates a list of primes less than or equal to n'''
m = [1]*(n+1)
for i in xrange(2,int((n)**0.5)+1):
if m[i... | This happens because `True` and `False` are looked up as globals in Python 2. The `0` and `1` literals are just constants, looked up by a quick array reference, while globals are *dictionary* lookups in the global namespace (falling through to the built-ins namespace):
```
>>> import dis
>>> def foo():
... a = Tru... |
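A side note for Python 3 readers (an addition of mine; the answer describes Python 2): `True` and `False` became keywords there, so they compile to constants and the global lookup disappears:

```python
import dis
import keyword

# In Python 3, True/False are keywords, not rebindable globals.
print(keyword.iskeyword('True'), keyword.iskeyword('False'))  # True True

def f():
    return True

# No LOAD_GLOBAL opcode is emitted for True in Python 3.
ops = [ins.opname for ins in dis.get_instructions(f)]
print(ops)
```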
Python calculating Catalan Numbers | 31,459,731 | 10 | 2015-07-16T16:29:27Z | 31,459,931 | 8 | 2015-07-16T16:38:59Z | [
"python",
"algorithm"
] | I have code which calculates Catalan numbers using the binomial-coefficient method.
```
def BinominalCoefficient(n,k):
res = 1;
if (k > n - k):
k = n - k
for i in range(k):
res *= (n - i)
res /= (i + 1)
return res
def CatalanNumbers(n):
c = BinominalCoefficient(2*n, n)
... | I assume you're using Python 3.
Your `res /= (i + 1)` should be `res //= (i + 1)` to force integer arithmetic:
```
def BinominalCoefficient(n,k):
res = 1
if (k > n - k):
k = n - k
for i in range(k):
res *= (n - i)
res //= (i + 1)
return res
def CatalanNumbers(n):
c = Binomin... |
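On Python 3.8+, the fixed code can be cross-checked against `math.comb`:

```python
import math

# C_n = C(2n, n) // (n + 1); math.comb uses exact integer arithmetic.
def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```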
Make a number more probable to result from random | 31,462,265 | 21 | 2015-07-16T18:52:19Z | 31,462,320 | 8 | 2015-07-16T18:55:27Z | [
"python",
"numpy",
"random"
] | I'm using `x = numpy.random.rand(1)` to generate a random number between 0 and 1. How do I make it so that `x > .5` is 2 times more probable than `x < .5`? | ```
tmp = random()
if tmp < 0.5: tmp = random()
```
is a pretty easy way to do it.
Ehh, I guess this is actually 3x as likely... that's what I get for sleeping through that class, I guess.
```
from random import random,uniform
def rand1():
tmp = random()
if tmp < 0.5:tmp = random()
return tmp
def rand2():
tmp = un... |
Make a number more probable to result from random | 31,462,265 | 21 | 2015-07-16T18:52:19Z | 31,462,327 | 26 | 2015-07-16T18:55:41Z | [
"python",
"numpy",
"random"
] | I'm using `x = numpy.random.rand(1)` to generate a random number between 0 and 1. How do I make it so that `x > .5` is 2 times more probable than `x < .5`? | That's a fitting name!
Just do a little manipulation of the inputs. First set `x` to be in the range from `0` to `1.5`.
```
x = numpy.random.uniform(0, 1.5)
```
`x` has a `2/3` chance of being greater than `0.5` and a `1/3` chance of being smaller. Then, if `x` is greater than `1.0`, subtract `.5` from it:
```
if x >= 1.0:
... |
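A sketch of the fold trick with the standard `random` module instead of numpy (the function name is mine), plus an empirical check:

```python
import random

def biased():
    # Draw on [0, 1.5) and fold the top third back into [0.5, 1.0),
    # so values above 0.5 are twice as likely as values below it.
    x = random.uniform(0, 1.5)
    if x >= 1.0:
        x -= 0.5
    return x

random.seed(0)
draws = [biased() for _ in range(30_000)]
print(sum(d > 0.5 for d in draws) / len(draws))  # close to 2/3
```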
Make a number more probable to result from random | 31,462,265 | 21 | 2015-07-16T18:52:19Z | 31,463,931 | 15 | 2015-07-16T20:28:59Z | [
"python",
"numpy",
"random"
] | I'm using `x = numpy.random.rand(1)` to generate a random number between 0 and 1. How do I make it so that `x > .5` is 2 times more probable than `x < .5`? | This is overkill for you, but it's good to know an actual method for generating a random number with any probability density function (pdf).
You can do that by subclassing [scipy.stats.rv\_continuous](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.rv_continuous.html#scipy-stats-rv-continuous), p...
Problems installing lxml in Ubuntu | 31,462,967 | 5 | 2015-07-16T19:32:37Z | 31,463,062 | 15 | 2015-07-16T19:37:50Z | [
"python",
"python-2.7",
"pip",
"lxml"
] | Getting the following errors when I do: **pip install lxml**
```
You are using pip version 6.0.8, however version 7.1.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting lxml
... | The output states `** make sure the development packages of libxml2 and libxslt are installed **`. Have you done that?
```
sudo apt-get install libxml2-dev libxslt-dev
```
Also, is there a particular reason you're installing with pip instead of installing the `python-lxml` package that comes with Ubuntu? Installing you...