| title | question_id | question_score | question_date | answer_id | answer_score | answer_date | tags | question_body_md | answer_body_md |
|---|---|---|---|---|---|---|---|---|---|
Generating LMDB for Caffe | 33,627,888 | 4 | 2015-11-10T10:29:30Z | 33,628,970 | 7 | 2015-11-10T11:29:53Z | [
"python",
"deep-learning",
"caffe",
"lmdb",
"pycaffe"
] | I am trying to build a deep learning model for Saliency analysis using caffe (I am using the python wrapper). But I am unable to understand how to generate the lmdb data structure for this purpose. I have gone through the Imagenet and mnist examples and I understand that I should generate labels in the format
```
my_t... | You can approach this problem in two ways:
**1.** Using HDF5 data layer instead of LMDB. HDF5 is more flexible and can support labels the size of the image. You can see [this answer](http://stackoverflow.com/a/31808324/1714410) for an example of constructing and using HDF5 input data layer.
**2.** You can have two LM... |
How to print the value of a Tensor object in TensorFlow? | 33,633,370 | 41 | 2015-11-10T15:19:58Z | 33,633,839 | 40 | 2015-11-10T15:41:24Z | [
"python",
"tensorflow"
] | I have been using the introductory example of matrix multiplication in TensorFlow.
```
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
```
And when I print the product, it is displaying it as a TensorObject (obviously).
product
```
<tensorflow.python.framewo... | The easiest\* way to evaluate the actual value of a `Tensor` object is to pass it to the `Session.run()` method, or call `Tensor.eval()` when you have a default session (i.e. in a `with tf.Session():` block, or see below). In general,\*\* you cannot print the value of a tensor without running some code in a session.
I... |
How to print the value of a Tensor object in TensorFlow? | 33,633,370 | 41 | 2015-11-10T15:19:58Z | 33,640,758 | 9 | 2015-11-10T22:20:06Z | [
"python",
"tensorflow"
] | I have been using the introductory example of matrix multiplication in TensorFlow.
```
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
```
And when I print the product, it is displaying it as a TensorObject (obviously).
product
```
<tensorflow.python.framewo... | No, you cannot see the content of the tensor without running the graph (doing `session.run()`). The only things you can see are:
* the dimensionality of the tensor (but I assume it is not hard to calculate it for the [list of the operations](http://tensorflow.org/api_docs/python/math_ops.md#contents) that TF has)
* t... |
How to print the value of a Tensor object in TensorFlow? | 33,633,370 | 41 | 2015-11-10T15:19:58Z | 36,296,783 | 24 | 2016-03-29T23:13:22Z | [
"python",
"tensorflow"
] | I have been using the introductory example of matrix multiplication in TensorFlow.
```
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
```
And when I print the product, it is displaying it as a TensorObject (obviously).
product
```
<tensorflow.python.framewo... | While other answers are correct that you cannot print the value until you evaluate the graph, they do not talk about one easy way of actually printing a value inside the graph, once you evaluate it.
The easiest way to see a value of a tensor whenever the graph is evaluated (using `run` or `eval`) is to use the [`Print... |
Python Error : ImportError: No module named 'xml.etree' | 33,633,954 | 2 | 2015-11-10T15:46:23Z | 33,634,074 | 7 | 2015-11-10T15:52:19Z | [
"python",
"xml"
] | I am simply trying to parse an XML file
```
import xml.etree.ElementTree as ET
tree = ET.parse('country_data.xml')
root = tree.getroot()
```
but this gives me
```
import xml.etree.ElementTree as ET
ImportError: No module named 'xml.etree'
```
i am using Python 3.5. I have tried to same code with Python 2.7 and 3.4 ... | Remove the file `xml.py` or a directory `xml` with a file `__init__.py` in it from your current directory and try again. Python will search the current directory first when importing modules. A file named `xml.py` or a package named `xml` in the current directory shadows the standard library package with the same name.... |
Pip not working on windows 10, freezes command prompt | 33,638,395 | 5 | 2015-11-10T19:53:35Z | 33,928,569 | 15 | 2015-11-26T00:05:08Z | [
"python",
"windows",
"pip"
] | I recently installed Python for windows 10 and need to use pip command to install requests package.
However, whenever I try to use pip in cmd it just freezes my command prompt.
Using CTRL+C, CTRL+D or any command like that to cancel it does not work either, the prompt just freezes like it's waiting for input or somethi... | I had exactly the same problem here (Windows 10.0.10240). After typing just "pip" and hitting enter, nothing else happened on the console. This problem was also affecting other .exe-compiled Python-related scripts, like mezzanine-project.exe.
The antivirus AVAST was the culprit (in my case) !!!
After disabling AV... |
When counting the occurrence of a string in a file, my code does not count the very first word | 33,640,387 | 4 | 2015-11-10T21:55:40Z | 33,640,520 | 7 | 2015-11-10T22:04:25Z | [
"python",
"string",
"file",
"text",
"readline"
] | ## Code
```
def main():
try:
file=input('Enter the name of the file you wish to open: ')
thefile=open(file,'r')
line=thefile.readline()
line=line.replace('.','')
line=line.replace(',','')
thefilelist=line.split()
thefilelistset=set(thefilelist)
d={}
for item in thefilelist:
... | My guess would be this line:
`wordcount=line.count(' '+item+' ')`
You are looking for "space" + YourWord + "space", and the first word is not preceded by space. |
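A quick runnable check of the off-by-one the answer describes, with a made-up line of text: the whitespace-padding trick misses the first word, while counting tokens from `split()` does not.

```python
line = "cat sat on the cat mat"

# The questioner's approach: pad the word with spaces on both sides.
# The first word has no leading space, so it is never matched.
padded_count = line.count(' ' + 'cat' + ' ')

# Counting whole tokens after split() has no boundary problem.
token_count = line.split().count('cat')
```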
How to do Xavier initialization on TensorFlow | 33,640,581 | 16 | 2015-11-10T22:07:54Z | 36,784,797 | 30 | 2016-04-22T04:23:57Z | [
"python",
"tensorflow"
] | I'm porting my Caffe network over to TensorFlow but it doesn't seem to have xavier initialization. I'm using `truncated_normal` but this seems to be making it a lot harder to train. | Now TensorFlow 0.8 has the xavier initializer implementation.
<https://www.tensorflow.org/versions/r0.8/api_docs/python/contrib.layers.html#xavier_initializer>
You can use something like this:
```
W = tf.get_variable("W", shape=[784, 256],
initializer=tf.contrib.layers.xavier_initializer())
``` |
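For older versions, or just to see what the initializer computes, here is a hedged NumPy sketch of the Glorot/Xavier *uniform* scheme: samples drawn from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). The function name is mine, not TensorFlow's.

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    """Glorot/Xavier uniform init: U(-limit, limit), limit = sqrt(6/(fan_in+fan_out))."""
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(784, 256)           # same shape as the answer's example
limit = np.sqrt(6.0 / (784 + 256))
```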
Why does TensorFlow example fail when increasing batch size? | 33,641,799 | 11 | 2015-11-10T23:43:14Z | 33,644,778 | 19 | 2015-11-11T05:30:05Z | [
"python",
"tensorflow"
] | I was looking at the [Tensorflow MNIST example for beginners](http://tensorflow.org/tutorials/mnist/beginners/index.md) and found that in this part:
```
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
changing the batch size fr... | You're using the very basic linear model in the beginners example?
Here's a trick to debug it - watch the cross-entropy as you increase the batch size (the first line is from the example, the second I just added):
```
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
cross_entropy = tf.Print(cross_entropy, [cross_entropy]... |
Why does TensorFlow example fail when increasing batch size? | 33,641,799 | 11 | 2015-11-10T23:43:14Z | 33,645,235 | 10 | 2015-11-11T06:19:15Z | [
"python",
"tensorflow"
] | I was looking at the [Tensorflow MNIST example for beginners](http://tensorflow.org/tutorials/mnist/beginners/index.md) and found that in this part:
```
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
changing the batch size fr... | @dga gave a great answer, but I wanted to expand a little.
When I wrote the beginners tutorial, I implemented the cost function like so:
> cross\_entropy = -tf.reduce\_sum(y\_\*tf.log(y))
I wrote it that way because that looks most similar to the mathematical definition of cross-entropy. But it might actually be bet... |
Why does TensorFlow example fail when increasing batch size? | 33,641,799 | 11 | 2015-11-10T23:43:14Z | 34,364,526 | 11 | 2015-12-18T21:54:10Z | [
"python",
"tensorflow"
] | I was looking at the [Tensorflow MNIST example for beginners](http://tensorflow.org/tutorials/mnist/beginners/index.md) and found that in this part:
```
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
changing the batch size fr... | **NaN occurs when 0\*log(0) occurs:**
replace:
```
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
```
with:
```
cross_entropy = -tf.reduce_sum(y_*tf.log(y + 1e-10))
``` |
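The failure mode is reproducible outside TensorFlow. A NumPy sketch with a hypothetical one-hot label and a perfectly confident prediction shows the 0\*log(0) term turning the sum into NaN, and the epsilon keeping it finite:

```python
import numpy as np

y_ = np.array([1.0, 0.0])   # one-hot label
y  = np.array([1.0, 0.0])   # fully saturated prediction

with np.errstate(divide='ignore', invalid='ignore'):
    naive = -np.sum(y_ * np.log(y))        # 0 * log(0) -> nan

safe = -np.sum(y_ * np.log(y + 1e-10))     # epsilon keeps every log finite
```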
Tensorflow and Anaconda on Ubuntu? | 33,646,541 | 7 | 2015-11-11T08:15:16Z | 33,698,750 | 26 | 2015-11-13T17:40:22Z | [
"python",
"anaconda",
"tensorflow"
] | On my Ubuntu 14.04, I have installed tensorflow, using "pip", as specified in the [Tensorflow Installation instructions](http://tensorflow.org/get_started/os_setup.md) and I made sure it was working by importing it in python and it did work.
Then, I installed Anaconda and it changed my .bashrc file by adding the follo... | I solved the problem but in a different way!
I found a link where the tensorflow.whl files were converted to conda packages, so I went ahead and installed it using the command:
```
conda install -c https://conda.anaconda.org/jjhelmus tensorflow
```
and it worked, since the $PATH points to anaconda packages, I can imp... |
Why do we use name parameter when creating a variable in Tensorflow? | 33,648,167 | 5 | 2015-11-11T10:01:41Z | 33,648,339 | 15 | 2015-11-11T10:10:40Z | [
"python",
"tensorflow"
] | In some of the places I saw the syntax, where variables are initialized with names, sometimes without names. For [example](http://tensorflow.org/get_started/basic_usage.md):
```
# With name
var = tf.Variable(0, name="counter")
# Without
one = tf.constant(1)
```
But then everywhere in the code, it is referred only by ... | The `name` parameter is optional (you can create variables and constants with or without it), and the variable you use in your program does not depend on it. Names can be helpful in a couple of places:
**When you want to save or restore your variables** (you can [save them to a binary file](http://tensorflow.org/api_d... |
Tensorflow image reading & display | 33,648,322 | 11 | 2015-11-11T10:09:45Z | 33,862,534 | 17 | 2015-11-23T01:39:14Z | [
"python",
"tensorflow"
] | I've got a bunch of images in a format similar to Cifar10 (binary file, `size = 96*96*3` bytes per image), one image after another ([STL-10 dataset](http://cs.stanford.edu/~acoates/stl10/)). The file I'm opening has 138MB.
I tried to read & check the contents of the Tensors containing the images to be sure that the re... | Just to give a complete answer:
```
filename_queue = tf.train.string_input_producer(['/Users/HANEL/Desktop/tf.png']) # list of files to read
reader = tf.WholeFileReader()
key, value = reader.read(filename_queue)
my_img = tf.image.decode_png(value) # use png or jpg decoder based on your files.
init_op = tf.initiali... |
Simplify a if statement Python | 33,653,175 | 2 | 2015-11-11T14:54:38Z | 33,653,207 | 7 | 2015-11-11T14:56:10Z | [
"python",
"if-statement"
] | Is there a way to simplify this if-statement:
```
if self[by1,bx1]=='A' or self[by1,bx1+1]=='A' or self[by1,bx1+2]=='A' or self[by1,bx1+3]=='A':
```
coming from a class where self[y,x] fetch a data in a table.
The original code is:
```
for i in range(4):
if self[by1,bx1]=='A' or self[by1,bx1+1]=='A'... | Sure, you could use `any` for this. This should be equivalent.
```
if any(self[by1,bx1+x]=='A' for x in range(4)):
``` |
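To see the equivalence, here is a self-contained sketch with a minimal stand-in for the questioner's `self[y, x]` table class (the `Grid` class here is hypothetical):

```python
class Grid:
    """Minimal stand-in for the questioner's class: indexable as grid[y, x]."""
    def __init__(self, rows):
        self.rows = rows
    def __getitem__(self, yx):
        y, x = yx
        return self.rows[y][x]

grid = Grid(["..A.", "...."])
by1, bx1 = 0, 0

verbose = (grid[by1, bx1] == 'A' or grid[by1, bx1 + 1] == 'A'
           or grid[by1, bx1 + 2] == 'A' or grid[by1, bx1 + 3] == 'A')
compact = any(grid[by1, bx1 + x] == 'A' for x in range(4))
```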
Error while importing Tensorflow in python2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found' | 33,655,731 | 21 | 2015-11-11T16:58:54Z | 34,897,674 | 7 | 2016-01-20T10:37:40Z | [
"python",
"ubuntu",
"glibc",
"tensorflow"
] | I have installed the Tensorflow bindings with python successfully. But when I try to import Tensorflow, I get the following error.
> ImportError: /lib/x86\_64-linux-gnu/libc.so.6: version `GLIBC\_2.17' not
> found (required by
> /usr/local/lib/python2.7/dist-packages/tensorflow/python/\_pywrap\_tensorflow.so)
I have ... | I tried [BR\_User solution](http://stackoverflow.com/a/33699100/1990516) and still had an annoying:
```
ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found
```
I am on CentOS 6.7, it also lacks an updated c++ standard lib, so to build on BR\_User solution I extracted the correct libstdc++ packa... |
Error while importing Tensorflow in python2.7 in Ubuntu 12.04. 'GLIBC_2.17 not found' | 33,655,731 | 21 | 2015-11-11T16:58:54Z | 34,900,471 | 8 | 2016-01-20T12:45:41Z | [
"python",
"ubuntu",
"glibc",
"tensorflow"
] | I have installed the Tensorflow bindings with python successfully. But when I try to import Tensorflow, I get the following error.
> ImportError: /lib/x86\_64-linux-gnu/libc.so.6: version `GLIBC\_2.17' not
> found (required by
> /usr/local/lib/python2.7/dist-packages/tensorflow/python/\_pywrap\_tensorflow.so)
I have ... | Okay, so here is the other solution I mentioned in my [previous answer](http://stackoverflow.com/a/34897674/1990516); it's trickier, but should always work on systems with GLIBC>=2.12 and GLIBCXX>=3.4.13.
In my case it was on a CentOS 6.7, but it's also fine for Ubuntu 12.04.
We're going to need a version of gcc th... |
Gunicorn Import by filename is not supported (module) | 33,655,841 | 5 | 2015-11-11T17:05:00Z | 33,656,476 | 8 | 2015-11-11T17:37:52Z | [
"python",
"containers",
"gunicorn"
] | I have newly created an Ubuntu container and installed the required packages in a virtual environment. Then I executed the pre-existing Python service code with `python path/to/my/file/X.py` (in the virtualenv) and it works fine. So I ran it with gunicorn as gunicorn -b 0.0.0.0:5000 path/to/my/file/X:a... | It's just like the error says: you can't refer to Python modules by file path; you must refer to them by dotted module path, starting at a directory that is on PYTHONPATH.
```
gunicorn -b 0.0.0.0:5000 path.inside.virtualenv.X:app
``` |
Unable to import Tensorflow "No module named copyreg" | 33,656,551 | 9 | 2015-11-11T17:42:11Z | 33,691,154 | 18 | 2015-11-13T10:56:51Z | [
"python",
"tensorflow"
] | El Capitan OS here. I've been trying to find a workaround for importing TensorFlow into my IPython notebook, but so far no luck.
Like many people in the forums, I've also had issues installing TensorFlow because of the six package. I was able to install after some fidgeting with brew
```
brew link gdbm
brew install p... | As Jonah commented, it's solved by this:
On MacOSX
If you encounter:
```
import six.moves.copyreg as copyreg
```
```
ImportError: No module named copyreg
```
Solution: TensorFlow depends on protobuf which requires six-1.10.0. Apple's default python environment has six-1.4.1 and may be difficult to upgrade. So we r... |
TensorFlow MNIST example not running with fully_connected_feed.py | 33,659,424 | 8 | 2015-11-11T20:40:17Z | 33,662,396 | 12 | 2015-11-12T00:26:11Z | [
"python",
"tensorflow"
] | I am able to run the `Deep MNIST Example` fine, but when running `fully_connected_feed.py`, I am getting the following error:
```
File "fully_connected_feed.py", line 19, in <module>
from tensorflow.g3doc.tutorials.mnist import input_data ImportError: No module named
g3doc.tutorials.mnist
```
I am new to Python so co... | This is a Python path issue. Assuming that the directory `tensorflow/g3doc/tutorials/mnist` is your current working directory (or in your Python path), the easiest way to resolve it is to change the following lines in fully\_connected\_feed.py from:
```
from tensorflow.g3doc.tutorials.mnist import input_data
from tens... |
Tensorflow causes logging messages to double | 33,662,648 | 4 | 2015-11-12T00:51:25Z | 33,664,610 | 8 | 2015-11-12T04:45:29Z | [
"python",
"logging",
"tensorflow"
] | So I was playing around with Google's [Tensorflow](http://www.tensorflow.org/) library they published yesterday and encountered an annoying bug that keeps biting me.
What I did was set up the python logging functions as I usually do, and the result was that, if I import the tensorflow library, all messages in the conso... | I get this output:
```
test
WARNING:TEST:test
```
Tensorflow is *also* using the logging framework and has set up its own handlers, so when you log, by default, it propagates up to the parent logging handlers inside tensorflow. You can change this behavior by setting:
```
logger.propagate = False
```
See also [dupl... |
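The doubling and the fix can be reproduced with the standard `logging` module alone; the list-collecting handlers below are hypothetical stand-ins for the handlers TensorFlow installs on the root logger:

```python
import logging

root_records, my_records = [], []

class ListHandler(logging.Handler):
    """Append each formatted message to a plain list (for inspection)."""
    def __init__(self, sink):
        super().__init__()
        self.sink = sink
    def emit(self, record):
        self.sink.append(record.getMessage())

root = logging.getLogger()
root_handler = ListHandler(root_records)
root.addHandler(root_handler)          # stand-in for the handler TF adds

logger = logging.getLogger('TEST')
logger.setLevel(logging.WARNING)
logger.addHandler(ListHandler(my_records))

logger.warning('test')                 # handled here AND propagated to root
doubled = (len(root_records), len(my_records))

logger.propagate = False
logger.warning('test')                 # handled here only
fixed = (len(root_records), len(my_records))

root.removeHandler(root_handler)       # clean up the demo handler
```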
How to create a Tensorflow Tensorboard Empty Graph | 33,663,081 | 7 | 2015-11-12T01:43:23Z | 33,685,349 | 12 | 2015-11-13T03:15:26Z | [
"python",
"tensorflow",
"tensorboard"
] | launch tensorboard with `tensorboard --logdir=/home/vagrant/notebook`
at tensorboard:6006 > graph, it says No graph definition files were found.
To store a graph, create a tf.python.training.summary\_io.SummaryWriter and pass the graph either via the constructor, or by calling its add\_graph() method.
```
import ten... | TensorBoard is a tool for [visualizing the TensorFlow graph](http://tensorflow.org/how_tos/graph_viz/index.md) and analyzing recorded metrics during training and inference. The graph is created using the Python API, then written out using the [`tf.train.SummaryWriter.add_graph()`](http://tensorflow.org/api_docs/python/... |
import input_data MNIST tensorflow not working | 33,664,651 | 8 | 2015-11-12T04:51:26Z | 33,664,749 | 12 | 2015-11-12T05:01:10Z | [
"python",
"import",
"machine-learning",
"tensorflow",
"mnist"
] | [TensorFlow MNIST example not running](http://stackoverflow.com/questions/33659424/tensorflow-mnist-example-not-running)
I checked this out and realized that `input_data` was not built-in. So I downloaded the whole folder from [here](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mn... | So let's assume that you are in the directory: `/somePath/tensorflow/tutorial` (and this is your working directory).
All you need to do is to download [input\_data.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/input_data.py) and put it there. Let the file name where yo... |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,175 | 8 | 2015-11-12T10:22:20Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobacconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | Nice question. The `>>>` prompt is in `sys.ps1`, the `...` in `sys.ps2`. The next question would be how to change this randomly. Just as a demonstration of changing it by hand:
```
>>> import sys
>>> sys.ps1 = '<<<'
<<<sys.ps1 = '<<< '
<<< sys.ps2 = '.?. '
<<< for i in line:
.?.
``` |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,188 | 19 | 2015-11-12T10:23:13Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobacconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | For changing the prompt, we use
```
>>>import sys
>>>sys.ps1 = '=>'
=>
```
Now the way to do it randomly would be something like this:
```
import random
import sys
random_prompts = ['->', '-->', '=>', 'Hello->']
sys.ps1 = random.choice(random_prompts)
```
To execute this when your python interpreter starts, you ca... |
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,342 | 19 | 2015-11-12T10:31:01Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobacconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | Try this:
```
>>> import sys
>>> import random
>>> class RandomPrompt(object):
... prompts = 'hello >', 'hi >', 'hey >'
... def __repr__ (self): return random.choice(self.prompts)
...
>>> sys.ps1 = RandomPrompt()
hello >1
1
hi >2
2
``` |
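Since the interpreter calls `repr(sys.ps1)` before printing each prompt, the trick can be exercised non-interactively as well:

```python
import random

class RandomPrompt(object):
    prompts = ('hello >', 'hi >', 'hey >')
    def __repr__(self):
        # The interpreter calls repr() on sys.ps1 before every prompt.
        return random.choice(self.prompts)

prompt = RandomPrompt()
# Each repr() call may differ, but every result comes from the pool.
samples = {repr(prompt) for _ in range(50)}
```

Assigning `sys.ps1 = RandomPrompt()` in an interactive session then yields a freshly chosen prompt on every line.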
Randomly change the prompt in the Python interpreter | 33,668,998 | 35 | 2015-11-12T10:14:07Z | 33,669,360 | 65 | 2015-11-12T10:31:36Z | [
"python",
"prompt",
"python-interactive"
] | It's kind of boring to always see the `>>>` prompt in Python. What would be the best way to go about randomly changing the prompt prefix?
I imagine an interaction like:
```
This is a tobacconist!>> import sys
Sorry?>> import math
Sorry?>> print sys.ps1
Sorry?
What?>>
``` | According to the [docs](https://docs.python.org/2/library/sys.html?highlight=sys#sys.ps1), if you assign a non-string object to `sys.ps1` then it will evaluate the `str` function of it each time:
> If a non-string object is assigned to either variable, its str() is
> re-evaluated each time the interpreter prepares to ... |
Python option parser overwrite '-h' | 33,669,665 | 4 | 2015-11-12T10:46:05Z | 33,669,714 | 7 | 2015-11-12T10:48:30Z | [
"python",
"elf"
] | I have the following options
```
parser = OptionParser()
parser.add_option('-a', '--all', action='store_true', dest='all', help='writes all header information')
parser.add_option('-h', '--file-header', action='store_true', dest='head', help='prints the elf file header information')
parser.add_option('-l', '--program-... | When creating the parser, pass `add_help_option=False`. So you will be able to define it by yourself:
```
parser = OptionParser(add_help_option=False)
``` |
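A runnable sketch of the fix, borrowing the `-h`/`--file-header` option from the question (and re-adding a long `--help` so help output stays reachable):

```python
from optparse import OptionParser

# Suppress the automatic -h/--help so -h can be reused.
parser = OptionParser(add_help_option=False)
parser.add_option('-h', '--file-header', action='store_true', dest='head',
                  help='prints the elf file header information')
parser.add_option('--help', action='help',
                  help='show this help message and exit')

options, args = parser.parse_args(['-h'])   # now sets options.head
```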
Unexpected behaviour in numpy, when dividing arrays | 33,674,967 | 8 | 2015-11-12T15:22:54Z | 33,675,792 | 7 | 2015-11-12T15:59:32Z | [
"python",
"arrays",
"numpy",
"division"
] | So, in numpy 1.8.2 (with python 2.7.6) there seems to be an issue in array division. When performing in-place division of a sufficiently large array (at least 8192 elements, more than one dimension, data type is irrelevant) with a part of itself, behaviour is inconsistent for different notations.
```
import numpy as n... | It's not really a bug; it's to do with the buffer size, as you suggest in the question. Setting the buffer size larger gets rid of the problem (for now...):
```
>>> np.setbufsize(8192*4) # sets new buffer size, returns current size
8192
>>> # same set up as in the question
>>> np.sum(arr != arr_copy), arr.size - np.sum(n... |
python idiomatic python for loop if else statement | 33,678,809 | 7 | 2015-11-12T18:36:33Z | 33,678,825 | 11 | 2015-11-12T18:37:26Z | [
"python",
"if-statement",
"for-loop",
"idiomatic"
] | How can I use `else` statement in an idiomatic Python `for` loop? Without `else` I can write e.g.:
```
res = [i for i in [1,2,3,4,5] if i < 4]
```
The result is: `[1, 2, 3]`
The normal form of the above code is:
```
res = []
for i in [1,2,3,4,5]:
if i < 4:
res.append(i)
```
The result is the same as in... | You were close, you just have to move the ternary to the part of the list comprehension where you're creating the value.
```
res = [i if i < 4 else 0 for i in range(1,6)]
``` |
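Placing the two forms side by side makes the difference concrete: the trailing `if` filters elements out, while the ternary at the front maps every element to a value:

```python
# Filtering form: drops elements that fail the test.
filtered = [i for i in [1, 2, 3, 4, 5] if i < 4]

# Mapping form with a ternary: keeps every element, substituting 0.
mapped = [i if i < 4 else 0 for i in range(1, 6)]
```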
How to extend Python Enum? | 33,679,930 | 8 | 2015-11-12T19:42:07Z | 33,680,021 | 10 | 2015-11-12T19:46:34Z | [
"python",
"python-3.x",
"enums"
] | What is best practice for extending `Enum` type in Python 3.4 and is there even a possibility for do this?
For example:
```
from enum import Enum
class EventStatus(Enum):
success = 0
failure = 1
class BookingStatus(EventStatus):
duplicate = 2
unknown = 3
Traceback (most recent call last):
...
TypeError... | > Subclassing an enumeration is allowed only if the enumeration does not define any members.
>
> Allowing subclassing of enums that define members would lead to a violation of some important invariants of types and instances.
<https://docs.python.org/3/library/enum.html#restricted-subclassing-of-enumerations>
So **no... |
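A short demonstration of the restriction, plus one common workaround: rebuild the extended enum from the combined members via the functional API (names taken from the question):

```python
from enum import Enum

class EventStatus(Enum):
    success = 0
    failure = 1

# Subclassing an enum that defines members raises TypeError.
try:
    class BookingStatusSub(EventStatus):
        duplicate = 2
    subclass_ok = True
except TypeError:
    subclass_ok = False

# Workaround: build a new enum from the combined member list.
combined = [(m.name, m.value) for m in EventStatus] + [('duplicate', 2), ('unknown', 3)]
BookingStatus = Enum('BookingStatus', combined)
```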
How do I check an input for an integer in Python? | 33,680,570 | 3 | 2015-11-12T20:19:46Z | 33,680,676 | 9 | 2015-11-12T20:26:42Z | [
"python",
"input"
] | I used:
```
day = int(input('Please input the day you were born: e.g 8th=8 21st = 21 : '))
month = int(input('Please input the month you were born: e.g may = 5 december = 12 : '))
year = int(input('Please input the year you were born: e.g 2001 / 1961 : '))
if day == int and month == int and year == int:
```
But it a... | ```
def get_int(p,error_msg="Thats Not An Int!"):
while True:
try:
return int(raw_input(p))
except (ValueError,TypeError):
print "ERROR: %s"%error_msg
day = get_int('Please input the day you were born: e.g 8th=8 21st = 21 : ')
#day is guaranteed to be an int
```
I like to... |
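A Python 3 variant of the same retry idea, factored so it can be tested without an interactive prompt; the `ask` parameter is a stand-in for `input` and is my addition:

```python
def get_int(prompt, ask=input, error_msg="That's not an int!"):
    """Keep asking until the reply parses as an integer."""
    while True:
        try:
            return int(ask(prompt))
        except (ValueError, TypeError):
            print("ERROR: %s" % error_msg)

# Simulate a user who types garbage twice, then a valid day.
replies = iter(['x', '3.5', '21'])
day = get_int('Day you were born: ', ask=lambda _: next(replies))
```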
How do I add a new column to spark data frame (Pyspark)? | 33,681,487 | 9 | 2015-11-12T21:14:32Z | 33,683,462 | 28 | 2015-11-12T23:37:19Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a Spark data frame (using Pyspark 1.5.1) and would like to add a new column.
Tried the following without any success:
```
type(randomed_hours) # => list
#Create in Python and transform to RDD
new_col = pd.DataFrame(randomed_hours,columns=['new_col'])
spark_new_col = sqlContext.createDataFrame(new_col)
my_d... | You cannot add an arbitrary column to a `DataFrame` in Spark. New columns can be created only by using literals:
```
from pyspark.sql.functions import lit
df = sqlContext.createDataFrame(
[(1, "a", 23.0), (3, "B", -23.0)], ("x1", "x2", "x3"))
df_with_x4 = df.withColumn("x4", lit(0))
df_with_x4.show()
## +---+--... |
How do I add a new column to spark data frame (Pyspark)? | 33,681,487 | 9 | 2015-11-12T21:14:32Z | 37,263,999 | 8 | 2016-05-16T22:04:51Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | I have a Spark data frame (using Pyspark 1.5.1) and would like to add a new column.
Tried the following without any success:
```
type(randomed_hours) # => list
#Create in Python and transform to RDD
new_col = pd.DataFrame(randomed_hours,columns=['new_col'])
spark_new_col = sqlContext.createDataFrame(new_col)
my_d... | To add a column using a UDF:
```
df = sqlContext.createDataFrame(
[(1, "a", 23.0), (3, "B", -23.0)], ("x1", "x2", "x3"))
from pyspark.sql.functions import udf
from pyspark.sql.types import *
def valueToCategory(value):
if value == 1: return 'cat1'
elif value == 2: return 'cat2'
...
else: return 'n/... |
Tensorflow One Hot Encoder? | 33,681,517 | 10 | 2015-11-12T21:16:01Z | 33,681,732 | 7 | 2015-11-12T21:28:56Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] | Does tensorflow have something similar to scikit learn's [one hot encoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) for processing categorical data? Would using a placeholder of tf.string behave as categorical data?
I realize I can manually pre-process the data before ... | After looking though the [python documentation](http://tensorflow.org/api_docs/python/index.md), I have not found anything similar. One thing that strengthen my belief that it does not exist is that in [their own example](https://github.com/tensorflow/tensorflow/blob/1d76583411038767f673a0c96174c80eaf9ff42f/tensorflow/... |
Tensorflow One Hot Encoder? | 33,681,517 | 10 | 2015-11-12T21:16:01Z | 33,682,213 | 24 | 2015-11-12T21:57:38Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] | Does tensorflow have something similar to scikit learn's [one hot encoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) for processing categorical data? Would using a placeholder of tf.string behave as categorical data?
I realize I can manually pre-process the data before ... | As of TensorFlow 0.8, there is now a [native one-hot op, `tf.one_hot`](https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#one_hot) that can convert a set of sparse labels to a dense one-hot representation. This is in addition to [`tf.nn.sparse_softmax_cross_entropy_with_logits`](https://www.tenso... |
Issue feeding a list into feed_dict in TensorFlow | 33,684,657 | 7 | 2015-11-13T01:47:05Z | 33,685,256 | 12 | 2015-11-13T03:04:05Z | [
"python",
"tensorflow"
] | I'm trying to pass a list into feed\_dict, however I'm having trouble doing so. Say I have:
```
inputs = 10 * [tf.placeholder(tf.float32, shape=(batch_size, input_size))]
```
where inputs is fed into some function "outputs" that I want to compute. So to run this in tensorflow, I created a session and ran the followin... | There are two issues that are causing problems here:
The first issue is that the [`Session.run()`](http://tensorflow.org/api_docs/python/client.md#Session.run) call only accepts a small number of types as the keys of the `feed_dict`. In particular, lists of tensors are ***not*** supported as keys, so you have to put e... |
Why is behavior different with respect to global variables in "import module" vs "from module import * "? | 33,687,904 | 6 | 2015-11-13T07:45:24Z | 33,688,115 | 9 | 2015-11-13T08:02:26Z | [
"python",
"python-3.x"
] | Let's have a.py be:
```
def foo():
global spam
spam = 42
return 'this'
```
At a console, if I simply import a, things make sense to me:
```
>>> import a
>>> a.foo()
'this'
>>> a.spam
42
```
However, if I do the less popular thing and...
```
>>> from a import *
>>> foo()
'this'
>>> spam
Traceback (most ... | When you ask for `a.spam`, a namespace search happens in module `a` and `spam` is found. But when you ask for just `spam`:
```
>>> from a import * # imported foo, spam doesn't exist yet
>>> foo()
```
`spam` is created in the *namespace* of `a` (you cannot access it with such an import, though), but not in the curren... |
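The distinction can be demonstrated without a file on disk by building module `a` by hand and simulating the star-import; the `ModuleType`/`exec` scaffolding is purely for the demo:

```python
import types

a = types.ModuleType('a')
exec(
    "def foo():\n"
    "    global spam\n"
    "    spam = 42\n"
    "    return 'this'\n",
    a.__dict__,
)

# Simulate `from a import *`: copy the current public names into our namespace.
ns = {name: getattr(a, name) for name in dir(a) if not name.startswith('_')}

result = ns['foo']()   # foo's globals are a.__dict__, so spam lands there
```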
Calculating Precision, Recall and F-score in one pass - python | 33,689,721 | 9 | 2015-11-13T09:40:54Z | 33,689,848 | 8 | 2015-11-13T09:49:09Z | [
"python",
"list",
"machine-learning",
"try-except",
"precision-recall"
] | [Accuracy, precision, recall and f-score](https://en.wikipedia.org/wiki/Precision_and_recall) are measures of a system quality in machine-learning systems. It depends on a confusion matrix of True/False Positives/Negatives.
Given a binary classification task, I have tried the following to get a function that returns a... | > what is the pythonic way to get the counts of the True/False
> Positives/Negatives without multiple loops through the dataset?
I would use a [`collections.Counter`](https://docs.python.org/2/library/collections.html#collections.Counter), roughly what you're doing with all of the `if`s (you should be using `elif`s, a... |
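A single-pass sketch along those lines with `collections.Counter`, counting `(gold, predicted)` pairs; the label lists are hypothetical (binary, 1 = positive):

```python
from collections import Counter

gold = [1, 1, 0, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0, 1, 0]

# One pass: count (gold, pred) pairs, then read off the four confusion cells.
cells = Counter(zip(gold, pred))
tp, fp = cells[(1, 1)], cells[(0, 1)]
fn, tn = cells[(1, 0)], cells[(0, 0)]

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
```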
List comprehension with else pass | 33,691,552 | 4 | 2015-11-13T11:18:49Z | 33,691,579 | 8 | 2015-11-13T11:20:53Z | [
"python",
"python-2.7",
"list-comprehension"
] | How do I do the following in a list comprehension?
```
test = [["abc", 1],["bca",2]]
result = []
for x in test:
if x[0] =='abc':
result.append(x)
else:
pass
result
Out[125]: [['abc', 1]]
```
Try 1:
```
[x if (x[0] == 'abc') else pass for x in test]
File "<ipython-input-127-d0bbe1907880>", ... | The `if` needs to be at the end and you don't need the `pass` in the list comprehension. The item will only be added if the `if` condition is met, otherwise the element will be ignored, so the `pass` is implicitly implemented in the list comprehension syntax.
```
[x for x in test if x[0] == 'abc']
```
For completenes... |
Why do we need endianness here? | 33,692,321 | 10 | 2015-11-13T12:01:04Z | 33,692,657 | 7 | 2015-11-13T12:21:46Z | [
"python",
"numpy",
"endianness"
] | I am reading a [source-code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/input_data.py) which downloads the zip-file and reads the data into numpy array. The code suppose to work on macos and linux and here is the snippet that I see:
```
def _read32(bytestream):
dt = nu... | That's because the downloaded data is in big-endian format, as described on the source page: <http://yann.lecun.com/exdb/mnist/>
> All the integers in the files are stored in the MSB first (high
> endian) format used by most non-Intel processors. Users of Intel
> processors and other low-endian machines must flip the bytes of ... |
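The MSB-first reading the answer describes can be illustrated with the stdlib `struct` module (a sketch; the actual loader uses a big-endian numpy dtype with `np.frombuffer`):

```python
import struct

# The four bytes 00 00 08 03 are the MNIST image-file magic number 2051,
# stored most-significant byte first (big endian).
raw = b'\x00\x00\x08\x03'

big, = struct.unpack('>I', raw)     # force big-endian interpretation -> 2051
little, = struct.unpack('<I', raw)  # a naive little-endian read gives garbage
```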
Use attribute and target matrices for TensorFlow Linear Regression Python | 33,698,510 | 15 | 2015-11-13T17:27:34Z | 33,712,950 | 14 | 2015-11-14T20:24:28Z | [
"python",
"matrix",
"machine-learning",
"scikit-learn",
"tensorflow"
] | I'm trying to follow [this tutorial](http://www.tensorflow.org/tutorials/mnist/beginners/index.md).
TensorFlow just came out and I'm really trying to understand it. I'm familiar with *penalized linear regression* like Lasso, Ridge, and ElasticNet and its usage in `scikit-learn`.
For `scikit-learn` Lasso regression, a... | Softmax is only a function added on top of a model (as in logistic regression, for example); it is not a model like
```
model = LassoCV()
model.fit(DF_X,SR_y)
```
Therefore you can't simply give it data with a `fit` method. However, you can easily create your model with the help of TensorFlow functions.
First of all, you have to create a... |
Flip non-zero values along each row of a lower triangular numpy array | 33,700,380 | 8 | 2015-11-13T19:27:37Z | 33,700,533 | 8 | 2015-11-13T19:37:06Z | [
"python",
"arrays",
"numpy",
"reverse"
] | I have a lower triangular array, like B:
```
B = np.array([[1,0,0,0],[.25,.75,0,0], [.1,.2,.7,0],[.2,.3,.4,.1]])
>>> B
array([[ 1. , 0. , 0. , 0. ],
[ 0.25, 0.75, 0. , 0. ],
[ 0.1 , 0.2 , 0.7 , 0. ],
[ 0.2 , 0.3 , 0.4 , 0.1 ]])
```
I want to flip it to look like:
```
array([[... | How about this:
```
# row, column indices of the lower triangle of B
r, c = np.tril_indices_from(B)
# flip the column indices by subtracting them from r, which is equal to the number
# of nonzero elements in each row minus one
B[r, c] = B[r, r - c]
print(repr(B))
# array([[ 1. , 0. , 0. , 0. ],
# [ 0.7... |
Converting large XML file to relational database | 33,703,114 | 11 | 2015-11-13T23:00:37Z | 34,303,681 | 7 | 2015-12-16T03:58:30Z | [
"javascript",
"python",
"xml",
"node.js",
"relational-database"
] | I'm trying to figure out the best way to accomplish the following:
1. Download a large XML (1GB) file on daily basis from a third-party website
2. Convert that XML file to relational database on my server
3. Add functionality to search the database
For the first part, is this something that would need to be done manu... | All steps could certainly be accomplished using `node.js`. There are modules available that will help you with each of these tasks:
1. * [node-cron](https://github.com/ncb000gt/node-cron): lets you easily set up cron tasks in your node program. Another option would be to set up a cron task on your operating system (lo... |
Call different function for each list item | 33,705,296 | 3 | 2015-11-14T05:04:25Z | 33,705,301 | 10 | 2015-11-14T05:05:08Z | [
"python"
] | Let's say I have a list like this:
```
[1, 2, 3, 4]
```
And a list of functions like this:
```
[a, b, c, d]
```
Is there an easy way to get this output? Something like `zip`, but with functions and arguments?
```
[a(1), b(2), c(3), d(4)]
``` | Use `zip()` and a list comprehension to apply each function to their paired argument:
```
arguments = [1, 2, 3, 4]
functions = [a, b, c, d]
results = [func(arg) for func, arg in zip(functions, arguments)]
```
Demo:
```
>>> def a(i): return 'function a: {}'.format(i)
...
>>> def b(i): return 'function b: {}'.format(... |
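A self-contained version of the zip-based pairing (the four toy functions are illustrative):

```python
def a(i): return i + 1
def b(i): return i * 2
def c(i): return i ** 2
def d(i): return -i

arguments = [1, 2, 3, 4]
functions = [a, b, c, d]

# zip pairs each function with its argument; the comprehension applies them.
results = [func(arg) for func, arg in zip(functions, arguments)]
```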
Alembic: IntegrityError: "column contains null values" when adding non-nullable column | 33,705,697 | 4 | 2015-11-14T06:13:29Z | 33,705,698 | 7 | 2015-11-14T06:13:29Z | [
"python",
"sqlalchemy",
"alembic"
] | I'm adding a column to an existing table. This new column is `nullable=False`.
```
op.add_column('mytable', sa.Column('mycolumn', sa.String(), nullable=False))
```
When I run the migration, it complains:
```
sqlalchemy.exc.IntegrityError: column "mycolumn" contains null values
That is because your existing rows have no value for the new column, i.e. `null`, which causes the error. When adding a non-nullable column, you must decide what value to give to already-existing rows.
---
**Alright, existing data should just have "lorem ipsum" for this new column then. But how do I do it? I can't UPDA... |
Using multiple Python engines (32Bit/64bit and 2.7/3.5) | 33,709,391 | 6 | 2015-11-14T14:22:28Z | 33,711,433 | 13 | 2015-11-14T17:49:00Z | [
"python",
"python-2.7",
"python-3.x",
"anaconda",
"spyder"
] | I would like to use Python for scientific applications and after some research decided that I will use Anaconda as it comes bundled with loads of packages and add new modules using `conda install` through the cmd is easy.
I prefer to use the 64 bit version for better RAM use and efficiency but
32bit version is needed ... | Make sure to set the right environmental variables (<https://github.com/conda/conda/issues/1744>)
Create a new environment for 32bit Python 2.7:
```
set CONDA_FORCE_32BIT=1
conda create -n py27_32 python=2.7
```
Activate it:
```
set CONDA_FORCE_32BIT=1
activate py27_32
```
Deactivate it:
```
deactivate py27_32
``... |
Are there rules for naming single-module Python packages? | 33,712,857 | 10 | 2015-11-14T20:14:31Z | 33,810,175 | 7 | 2015-11-19T17:08:51Z | [
"python",
"package",
"python-module",
"pypi"
] | Should the name I give to the lone module in a Python package match the name of the package?
For example if I have a package with a single module with the structure
```
super-duper/
super/
    __init__.py
mycode.py
...
```
I can create a package `super-duper` on PyPi which, when installed, w... | The short answer to your question is: yes, it's generally a good practice to have the name of your module match the name of the package for single module packages (which should be most published packages.)
The slightly longer answer is that naming conventions are always political. The generally accepted method for def... |
Unable to install python pip on Ubuntu 14.04 | 33,717,197 | 2 | 2015-11-15T06:49:21Z | 34,670,459 | 11 | 2016-01-08T06:00:58Z | [
"python",
"ubuntu",
"pip"
] | This is the command I used to install python-pip
```
sudo apt-get install python-pip
```
I get the following error
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or... | got the same error when I install python-pip, the following command solved my problem.
```
sudo apt-get install python-pkg-resources=3.3-1ubuntu1
sudo apt-get install python-setuptools
``` |
Why is this TensorFlow implementation vastly less successful than Matlab's NN? | 33,720,645 | 20 | 2015-11-15T14:12:06Z | 33,723,404 | 18 | 2015-11-15T18:34:47Z | [
"python",
"matlab",
"neural-network",
"tensorflow"
] | As a toy example I'm trying to fit a function `f(x) = 1/x` from 100 no-noise data points. The matlab default implementation is phenomenally successful with mean square difference ~10^-10, and interpolates perfectly.
I implement a neural network with one hidden layer of 10 sigmoid neurons. I'm a beginner at neural netw... | I tried training for 50000 iterations it got to 0.00012 error. It takes about 180 seconds on Tesla K40.
[](http://i.stack.imgur.com/cH2hN.png)
It seems that for this kind of problem, first order gradient descent is not a good fit (pun intended), and y... |
Why is this TensorFlow implementation vastly less successful than Matlab's NN? | 33,720,645 | 20 | 2015-11-15T14:12:06Z | 33,728,310 | 12 | 2015-11-16T04:00:59Z | [
"python",
"matlab",
"neural-network",
"tensorflow"
] | As a toy example I'm trying to fit a function `f(x) = 1/x` from 100 no-noise data points. The matlab default implementation is phenomenally successful with mean square difference ~10^-10, and interpolates perfectly.
I implement a neural network with one hidden layer of 10 sigmoid neurons. I'm a beginner at neural netw... | btw, here's a slightly cleaned up version of the above that cleans up some of the shape issues and unnecessary bouncing between tf and np. It achieves 3e-08 after 40k steps, or about 1.5e-5 after 4000:
```
import tensorflow as tf
import numpy as np
def weight_variable(shape):
initial = tf.truncated_normal(shape, st... |
How should I shorten this line of Python code? | 33,723,612 | 2 | 2015-11-15T18:55:56Z | 33,723,656 | 7 | 2015-11-15T18:59:34Z | [
"python",
"python-2.7",
"pep8"
] | Here is the line that needs to be shortened.
```
tree_top = os.path.abspath(os.path.expanduser(os.path.expandvars(sys.argv[1])))
```
* Should I create a variable for each procedure?
* Should I alias `os.path,abspath`, `os.path.expandvars` and `os.path.expanduser` to have shorter names?
* Should I use backslashes? | The easiest way to reduce the width is to use implicit line continuation within parentheses:
```
tree_top = os.path.abspath(
os.path.expanduser(
os.path.expandvars(sys.argv[1])
)
)
```
Alternatively, just select the parts of `os.path` that you need:
```
from os.path import abspath, expanduser, expand... |
How can numpy be so much faster than my Fortran routine? | 33,723,771 | 73 | 2015-11-15T19:10:57Z | 33,724,424 | 105 | 2015-11-15T20:07:33Z | [
"python",
"arrays",
"performance",
"numpy",
"fortran"
] | I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1/2G in size. I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up ... | Your Fortran implementation suffers two major shortcomings:
* You mix IO and computations (and read from the file entry by entry).
* You don't use vector/matrix operations.
This implementation does perform the same operation as yours and is faster by a factor of 20 on my machine:
```
program test
integer gridsize,... |
How can numpy be so much faster than my Fortran routine? | 33,723,771 | 73 | 2015-11-15T19:10:57Z | 33,724,538 | 53 | 2015-11-15T20:18:31Z | [
"python",
"arrays",
"performance",
"numpy",
"fortran"
] | I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1/2G in size. I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up ... | The numpy is faster because you wrote much more efficient code in python (and much of the numpy backend is written in optimized Fortran and C) and terribly inefficient code in Fortran.
Look at your python code. You load the entire array at once and then call functions that can operate on an array.
Look at your fortra... |
Pip hangs in Windows 7 | 33,724,228 | 7 | 2015-11-15T19:50:39Z | 34,800,120 | 7 | 2016-01-14T21:30:46Z | [
"python",
"windows",
"python-2.7",
"python-3.x",
"pip"
] | I have `Python 2.7.10` installed with pip om Windows 7. When I'm trying to install package or even just run `pip` in cmd with no options, it stacks, prints nothing, and even ctrl+c does not work, I have to close cmd.
Task Manager shows 3 running `pip.exe *32` processes, and when I close cmd I can kill one of them. Ot... | I had exactly the same problem. The reason - in my case - was my antivirus program Avast. It blocked pip. As soon as I inactivated it. It works. I need to find a way now to explain Avast to stop blocking pip. |
Counting the number of unique words in a list | 33,726,361 | 2 | 2015-11-15T23:26:14Z | 33,726,420 | 8 | 2015-11-15T23:33:06Z | [
"python",
"python-3.x"
] | Using the following code from <http://stackoverflow.com/a/11899925>, I am able to find if a word is unique or not (by comparing if it was used once or greater than once):
```
helloString = ['hello', 'world', 'world']
count = {}
for word in helloString :
if word in count :
count[word] += 1
else:
count... | The best way to solve this is to use the `set` collection type. A `set` is a collection in which all elements are unique. Therefore:
```
unique = set([ 'one', 'two', 'two'])
len(unique) # is 2
```
You can use a set from the outset, adding words to it as you go:
```
unique.add('three')
```
This will throw out any d... |
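A sketch combining both ideas — `set` for the unique-word count, and a `Counter` (in the spirit of the question's manual tally) for the words used exactly once:

```python
from collections import Counter

helloString = ['hello', 'world', 'world']

unique_words = set(helloString)   # duplicates collapse automatically
num_unique = len(unique_words)    # number of distinct words

# Keep full counts to find which words appear exactly once:
counts = Counter(helloString)
used_once = [w for w, n in counts.items() if n == 1]
```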
In slicing, why can't I reverse a list, skipping the last item in a single bracket? | 33,727,311 | 4 | 2015-11-16T01:40:15Z | 33,727,329 | 9 | 2015-11-16T01:42:08Z | [
"python"
] | In Python, I can set the `end` that I want in a slice:
```
l = [0, 1, 2, 3, 4, 5]
l[:-1] = [0, 1, 2, 3, 4]
```
I can also set the step I want:
```
l[::-1] = [5, 4, 3, 2, 1, 0]
```
So, how come I cannot reverse the list that skips the last item in a single take? I mean why this happens:
```
l[:-1:-1] = []
```
To g... | You can, but you have to do it like this:
```
>>> x[-2::-1]
[4, 3, 2, 1, 0]
```
The reason is that when you use a negative slice, the "start" of the slice is towards the end of the list, and the "end" of the slice is towards the beginning of the list. In other words, if you want to take a backwards slice and leave of... |
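A runnable check of the slices discussed above:

```python
l = [0, 1, 2, 3, 4, 5]

# Start at the second-to-last element and step backwards:
reversed_without_last = l[-2::-1]

# l[:-1:-1] is empty: with a negative step the implicit start is already
# the last element, and the stop -1 also names the last element.
empty = l[:-1:-1]
```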
Python thinks my tuple is an integer | 33,735,091 | 2 | 2015-11-16T12:12:39Z | 33,735,141 | 8 | 2015-11-16T12:14:54Z | [
"python",
"indexing",
"integer",
"int",
"tuples"
] | I am trying to print out the positions of a given substring inside of a string, but on line 18 I keep getting the error
`Traceback (most recent call last):
File "prog.py", line 18, in <module>
TypeError: 'int' object has no attribute '__getitem__'`
I have no idea why this is happening, because I am new to python. But... | `tracked = (p)` is an integer, not a tuple. The brackets don't necessarily create a tuple, because they're also used for operator precedence in expressions. In this case, it's just evaluating it as an expression so `(p)` gets evaluated to `p`. If you wanted to make it a tuple, you'd need to add a comma `(p,)` which mak... |
How to compact this for? | 33,738,355 | 3 | 2015-11-16T15:00:39Z | 33,738,408 | 7 | 2015-11-16T15:03:18Z | [
"python",
"dictionary"
] | I have a dictionary of dictionaries. I want to count how many of those dictionaries have the element "status" set to "connecting".
This is my working code:
```
connecting = 0
for x in self.servers:
if self.servers[x]["status"] == "connecting": connecting += 1
```
Is there any way of compacting this? I was thinking so... | You can use a generator expression within `sum` function :
```
sum(x["status"]=="connecting" for x in self.servers.values())
```
Note that since the result of `x["status"]=="connecting"` is a boolean value, and if it is True Python evaluates it as 1, in the end this returns the number of dictionaries that f... |
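A self-contained version with toy data (the server names and statuses are made up):

```python
servers = {
    'alpha': {'status': 'connecting'},
    'beta': {'status': 'connected'},
    'gamma': {'status': 'connecting'},
}

# True counts as 1 and False as 0, so sum() tallies the matches.
connecting = sum(s['status'] == 'connecting' for s in servers.values())
```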
sampling multinomial from small log probability vectors in numpy/scipy | 33,738,382 | 14 | 2015-11-16T15:01:51Z | 33,819,405 | 8 | 2015-11-20T05:08:56Z | [
"python",
"numpy",
"scipy",
"probability",
"precision"
] | Is there a function in numpy/scipy that lets you sample multinomial from a vector of small log probabilities, without losing precision? example:
```
# sample element randomly from these log probabilities
l = [-900, -1680]
```
the naive method fails because of underflow:
```
import scipy
import numpy as np
# this mak... | First of all, I believe the problem you're encountering is because you're normalizing your probabilities incorrectly. This line is incorrect:
```
a = np.exp(l) / scipy.misc.logsumexp(l)
```
You're dividing a probability by a log probability, which makes no sense. Instead you probably want
```
a = np.exp(l - scipy.mi... |
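The normalization the answer recommends can be sketched with only the stdlib by writing out the log-sum-exp trick explicitly (real code would call `scipy.misc.logsumexp` and numpy instead):

```python
import math

l = [-900, -1680]

# Subtracting the max before exponentiating keeps every term in range;
# this is exactly what logsumexp does internally.
m = max(l)
log_norm = m + math.log(sum(math.exp(x - m) for x in l))

# Normalized probabilities: exp(l_i - logsumexp(l))
a = [math.exp(x - log_norm) for x in l]
```

With these inputs the second probability underflows to 0.0 as a float, but the first is computed correctly instead of producing nan.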
How do I know if I can disable SQLALCHEMY_TRACK_MODIFICATIONS? | 33,738,467 | 29 | 2015-11-16T15:05:39Z | 33,790,196 | 31 | 2015-11-18T20:56:34Z | [
"python",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] | Every time I run my app that uses Flask-SQLAlchemy I get the following warning that the `SQLALCHEMY_TRACK_MODIFICATIONS` option will be disabled.
```
/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant over... | Most likely your application doesn't use the Flask-SQLAlchemy event system, so you're probably safe to turn off. You'll need to audit the code to verify--you're looking for anything that hooks into [`models_committed` or `before_models_committed`](http://flask-sqlalchemy.pocoo.org/dev/signals/). If you do find that you... |
What does from __future__ import absolute_import actually do? | 33,743,880 | 35 | 2015-11-16T20:18:11Z | 33,744,115 | 13 | 2015-11-16T20:35:40Z | [
"python",
"python-2.7",
"python-import",
"python-2.5"
] | I have [answered](http://stackoverflow.com/a/22679558/2588818) a question regarding absolute imports in Python, which I thought I understood based on reading [the Python 2.5 changelog](https://docs.python.org/2.5/whatsnew/pep-328.html) and accompanying [PEP](https://www.python.org/dev/peps/pep-0328/). However, upon ins... | The changelog is sloppily worded. `from __future__ import absolute_import` does not care about whether something is part of the standard library, and `import string` will not always give you the standard-library module with absolute imports on.
`from __future__ import absolute_import` means that if you `import string`... |
Spark union of multiple RDDs | 33,743,978 | 8 | 2015-11-16T20:25:01Z | 33,744,540 | 15 | 2015-11-16T21:00:31Z | [
"python",
"apache-spark",
"pyspark"
] | In my pig code I do this:
```
all_combined = Union relation1, relation2,
relation3, relation4, relation5, relation 6.
```
I want to do the same with spark. However, unfortunately, I see that I have to keep doing it pairwise:
```
first = rdd1.union(rdd2)
second = first.union(rdd3)
third = second.union(rdd4)
# ..... | If these are RDDs you can use `SparkContext.union` method:
```
rdd1 = sc.parallelize([1, 2, 3])
rdd2 = sc.parallelize([4, 5, 6])
rdd3 = sc.parallelize([7, 8, 9])
rdd = sc.union([rdd1, rdd2, rdd3])
rdd.collect()
## [1, 2, 3, 4, 5, 6, 7, 8, 9]
```
There is no `DataFrame` equivalent but it is just a matter of a simple... |
Problems implementing an XOR gate with Neural Nets in Tensorflow | 33,747,596 | 7 | 2015-11-17T01:22:50Z | 33,750,395 | 12 | 2015-11-17T06:21:18Z | [
"python",
"neural-network",
"tensorflow"
] | I want to make a trivial neural network, it should just implement the XOR gate. I am using the TensorFlow library, in python.
For an XOR gate, the only data I train with is the complete truth table; that should be enough, right? Over-optimization is what I expect to happen very quickly. The problem with the code is th... | There are a few issues with your program.
The first issue is that the function you're learning isn't XOR - it's NOR. The lines:
```
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array([[1], [0], [0], [0]])
```
...should be:
```
xTrain = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
yTrain = np.array(... |
Sparksql filtering (selecting with where clause) with multiple conditions | 33,747,834 | 2 | 2015-11-17T01:51:20Z | 33,754,039 | 7 | 2015-11-17T10:03:01Z | [
"python",
"sql",
"apache-spark",
"apache-spark-sql",
"pyspark"
] | Hi I have the following issue:
```
numeric.registerTempTable("numeric").
```
All the values that I want to filter on are literal null strings and not N/A or Null values.
I tried these three options:
1. `numeric_filtered = numeric.filter(numeric['LOW'] != 'null').filter(numeric['HIGH'] != 'null').filter(numeric['NOR... | Your are using logical conjunction (AND). It means that all columns have to be different than `'null'` for row to be included. Lets illustrate that using `filter` version as an example:
```
numeric = sqlContext.createDataFrame([
('3.5,', '5.0', 'null'), ('2.0', '14.0', 'null'), ('null', '38.0', 'null'),
('nul... |
output of numpy.where(condition) is not an array, but a tuple of arrays: why? | 33,747,908 | 10 | 2015-11-17T01:59:31Z | 33,748,216 | 8 | 2015-11-17T02:38:59Z | [
"python",
"arrays",
"numpy"
] | I am experimenting with the `numpy.where(condition[, x, y])` function.
From the [numpy documentation,](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html) I learn that if you give just one array as input, it should return the indices where the array is non-zero (i.e. "True"):
> If only condi... | In Python `(1)` means just `1`. `()` can be freely added to group numbers and expressions for readability (e.g. `(1+3)*3` vs. `(1+3,)*3`). Thus to denote a one-element tuple Python uses `(1,)` (and requires you to write it that way as well).
Thus
```
(array([4, 5, 6, 7, 8]),)
```
is a one element tuple, that element being an arr... |
Tensorflow: How to replace a node in a calculation graph? | 33,748,552 | 5 | 2015-11-17T03:17:24Z | 33,770,771 | 9 | 2015-11-18T02:24:34Z | [
"python",
"tensorflow"
] | If you have two disjoint graphs, and want to link them, turning this:
```
x = tf.placeholder('float')
y = f(x)
y = tf.placeholder('float')
z = f(y)
```
into this:
```
x = tf.placeholder('float')
y = f(x)
z = g(y)
```
Is there a way to do that? It seems like it could make construction easier in some cases.
For exa... | TL;DR: If you can define the two computations as Python functions, you should do that. If you can't, there's more advanced functionality in TensorFlow to serialize and import graphs, which allows you to compose graphs from different sources.
One way to do this in TensorFlow is to build the disjoint computations as sep... |
Count occurrence of tuples with Python | 33,749,573 | 2 | 2015-11-17T05:07:43Z | 33,749,614 | 8 | 2015-11-17T05:11:12Z | [
"python",
"python-2.7",
"tuples"
] | I'm trying to convert a list of Python tuples that display product and cost to a list of tuples that display the cost and the count of products at a given cost.
For example, given the below input:
```
[('Product1', 9), ('Product2', 1),
('Product3', 1), ('Product4', 2),
('Product5', 3), ('Product6', 4),
('Product7'... | Maybe [`collections.Counter`](http://docs.python.org/2/library/collections.html#collections.Counter) could solve your problem:
```
>>> from collections import Counter
>>> c = Counter(elem[1] for elem in given_list)
```
Output will look like this:
```
Counter({1: 3, 3: 3, 2: 2, 4: 2, 5: 2, 6: 2, 7: 2, 8: 1, 9: 1})
``... |
How to install xgboost package in python (windows platform)? | 33,749,735 | 13 | 2015-11-17T05:22:41Z | 35,119,904 | 20 | 2016-01-31T21:46:54Z | [
"python",
"python-2.7",
"installation",
"machine-learning",
"xgboost"
] | <http://xgboost.readthedocs.org/en/latest/python/python_intro.html>
On the homepage of xgboost(above link), it says:
To install XGBoost, do the following steps:
1. You need to run `make` in the root directory of the project
2. In the python-package directory run
python setup.py install
However, when I did it, fo... | Note that as of the most recent release the Microsoft Visual Studio instructions no longer seem to apply as this link returns a 404 error:
<https://github.com/dmlc/xgboost/tree/master/windows>
You can read more about the removal of the MSVC build from Tianqi Chen's comment [here](https://github.com/dmlc/xgboost/issue... |
Calculate logarithm in python | 33,754,670 | 2 | 2015-11-17T10:32:06Z | 33,754,732 | 10 | 2015-11-17T10:35:06Z | [
"python"
] | I am wondering why the result of `log base 10 (1.5)` in python = 0.405465108108 while the real answer = 0.176091259.
This is the code that I wrote:
```
import math
print math.log(1.5)
```
Does anyone know how to solve this issue? | From [the documentation](https://docs.python.org/2/library/math.html#math.log):
> With one argument, return the natural logarithm of *x* (to base *e*).
>
> With two arguments, return the logarithm of *x* to the given *base*, calculated as `log(x)/log(base)`.
But the log10 is made available as `math.log10()`, which do... |
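The three forms side by side:

```python
import math

natural = math.log(1.5)        # base e, the value the asker saw
base10 = math.log(1.5, 10)     # two-argument form: log(x)/log(base)
also_base10 = math.log10(1.5)  # dedicated base-10 function
```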
For loop syntax in Python without using range() or xrange() | 33,759,539 | 3 | 2015-11-17T14:34:14Z | 33,759,625 | 7 | 2015-11-17T14:37:33Z | [
"python",
"syntax"
] | I do not know much about Python, so I apologize if my question is a very basic one.
Let's say I have a list
```
lst = [1,2,3,4,5,6,7,8,9,10]
```
Now what I want to know is whether there is any way to write the following piece of code in Python without using `range()` or `xrange()`:
```
for i in lst:
for j in lst... | It looks like you might want to use [enumerate()](https://docs.python.org/3/library/functions.html#enumerate):
```
for index, item in enumerate(lst):
for j in lst[index+1:]:
#Do Something
``` |
For loop syntax in Python without using range() or xrange() | 33,759,539 | 3 | 2015-11-17T14:34:14Z | 33,759,639 | 8 | 2015-11-17T14:38:17Z | [
"python",
"syntax"
] | I do not know much about Python, so I apologize if my question is a very basic one.
Let's say I have a list
```
lst = [1,2,3,4,5,6,7,8,9,10]
```
Now what I want to know is whether there is any way to write the following piece of code in Python without using `range()` or `xrange()`:
```
for i in lst:
for j in lst... | This is what list slicing is about, you can take part of your list from i'th element through
```
lst[i:]
```
Furthermore, in order to have both the index and the value you need the `enumerate` operation, which changes the list into a list of pairs `(index, value)`.
Thus
```
for ind, i in enumerate(lst):
for j in lst[ind+1: ]:... |
Tensorflow: How to restore a previously saved model (python) | 33,759,623 | 48 | 2015-11-17T14:37:26Z | 33,762,168 | 66 | 2015-11-17T16:30:03Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow"
] | I want to make a prediction with a model already saved by tensorflow, so I need to restore the model first.
The code I have fails because I can't call Saver() without the variables of the model, but that is exactly what I want to load! Do I need to create a dummy model with the same variables first and then restore my... | The checkpoints that are saved contain values for the `Variable`s in your model, not the model/graph itself, which means that the graph should be the same when you restore the checkpoint.
Here's an example for a linear regression where there's a training loop that saves variable checkpoints and an evaluation section t... |
Tensorflow: How to restore a previously saved model (python) | 33,759,623 | 48 | 2015-11-17T14:37:26Z | 33,763,208 | 21 | 2015-11-17T17:22:19Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow"
] | I want to make a prediction with a model already saved by tensorflow, so I need to restore the model first.
The code I have fails because I can't call Saver() without the variables of the model, but that is exactly what I want to load! Do I need to create a dummy model with the same variables first and then restore my... | There are two parts to the model, the model definition, saved by `Supervisor` as `graph.pbtxt` in the model directory and the numerical values of tensors, saved into checkpoint files like `model.ckpt-1003418`.
The model definition can be restored using `tf.import_graph_def`, and the weights are restored using `Saver`.... |
Cannot run Google App Engine custom managed VM: --custom-entrypoint must be set error | 33,764,630 | 4 | 2015-11-17T18:42:41Z | 33,814,096 | 7 | 2015-11-19T20:48:52Z | [
"python",
"google-app-engine",
"google-app-engine-python",
"gae-module",
"google-managed-vm"
] | **PROBLEM DESCRIPTION**
I am trying to create a custom managed VM for Google App Engine that behaves identically to the standard python27 managed VM provided by Google. (I'm doing this as a first step to adding a C++ library to the runtime).
From google [documentation](https://cloud.google.com/appengine/docs/managed-... | **UPDATE**
THIS MAY NO LONGER BE ACCURATE. SEE NICK'S ANSWER.
(Though I could not get that working. But I did not try very hard)
---
There is a completely undocumented but absolutely essential piece of information w.r.t. custom managed VMs:
**THEY CANNOT BE RUN ON THE DEVELOPMENT SERVER!**
If you think this cruci... |
Finding partial subsets python | 33,768,108 | 2 | 2015-11-17T22:16:58Z | 33,768,143 | 9 | 2015-11-17T22:19:18Z | [
"python",
"set"
] | I'm looking for a way to get the number of elements of one `set` that appear in another `set`.
Given these two sets:
```
a = 'a b c d'
b = 'a b c e f'
a = set(a.split())
b = set(b.split())
```
This prints false:
```
print a.issubset(b) # prints False
```
Is there a pythonic way to instead print "3" since three ele... | IIUC, you can use [`set.intersection`](https://docs.python.org/2/library/sets.html#set-objects):
```
>>> a.issubset(b)
False
>>> a.intersection(b)
{'a', 'c', 'b'}
>>> len(a.intersection(b))
3
```
which could be abbreviated `&` since both `a` and `b` are sets:
```
>>> len(a & b)
3
``` |
Generate and execute R, Python, etc.., script from within bash script | 33,769,018 | 4 | 2015-11-17T23:26:00Z | 33,769,363 | 8 | 2015-11-17T23:54:31Z | [
"python",
"bash"
] | I have been trying to find a solution for this for a while but haven't found anything satisfactory yet. I write a lot of bash scripts, but sometimes I want to use R or Python as part of the script. Right now, I end up having to write two scripts; the original bash script to perform the first half of the task, and the R... | There are probably lots of solutions, but this one works:
```
#!/bin/bash
## do stuff
R --slave <<EOF
## R code
set.seed(101)
rnorm($1)
EOF
```
If you want the flexibility to pass additional bash arguments to R, I suggest:
```
#!/bin/bash
## do stuff
R --slave --args $@ <<EOF
## R code
set.seed(101)
args... |
Libxml2 installation onto Mac | 33,770,087 | 3 | 2015-11-18T01:11:15Z | 33,770,588 | 7 | 2015-11-18T02:05:39Z | [
"python",
"osx",
"libxml2"
] | I'm trying to install "libxml2" and "libxslt" in order to use scrapy (web scraping with python) on a mac.
I have homebrew and I ran
`$ brew install libxml2 libxslt`
I get this message
`OS X already provides this software and installing another version in parallel can cause all kinds of trouble.`
When I try to instal... | You should run `$ xcode-select --install`. This will install the XCode command line tools which include libxml2. |
How to annotate Count with a condition in a Django queryset | 33,775,011 | 10 | 2015-11-18T08:28:41Z | 33,777,815 | 18 | 2015-11-18T10:39:15Z | [
"python",
"django",
"django-queryset"
] | Using Django ORM, can one do something like `queryset.objects.annotate(Count('queryset_objects', gte=VALUE))`. Catch my drift?
---
Here's a quick example to use for illustrating a possible answer:
In a Django website, content creators submit articles, and regular users view (i.e. read) the said articles. Articles ca... | ### For django >= 1.8
Use [Conditional Aggregation](https://docs.djangoproject.com/en/1.8/ref/models/conditional-expressions/#conditional-aggregation):
```
from django.db.models import Count, Case, When, IntegerField
Article.objects.annotate(
numviews=Count(Case(
When(readership__what_time__lt=treshold, t... |
Building custom Caffe layer in python | 33,778,225 | 9 | 2015-11-18T10:57:44Z | 33,797,142 | 8 | 2015-11-19T06:56:26Z | [
"python",
"deep-learning",
"caffe",
"pycaffe"
] | After parsing many links regarding building Caffe layers in Python I still have difficulties in understanding a few concepts. Can someone please clarify them?
* Blobs and weights python structure for network is explained here: [Finding gradient of a Caffe conv-filter with regards to input](http://stackoverflow.com/quest... | You asked a lot of questions here, I'll give you some highlights and pointers that I hope will clarify matters for you. I will not explicitly answer all your questions.
It seems like you are most confused about the difference between a blob and a layer's input/output. Indeed most of the layers have a *single* blob ...
Spark Dataframe distinguish columns with duplicated name | 33,778,664 | 5 | 2015-11-18T11:16:51Z | 33,779,190 | 11 | 2015-11-18T11:44:47Z | [
"python",
"apache-spark",
"apache-spark-sql",
"pyspark",
"spark-dataframe"
] | So as I know in Spark Dataframe, that for multiple columns can have the same name as shown in below dataframe snapshot:
```
[
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.... | Let's start with some data:
```
from pyspark.mllib.linalg import SparseVector
from pyspark.sql import Row
df1 = sqlContext.createDataFrame([
Row(a=107831, f=SparseVector(
5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=125231, f=SparseVector(
5, {0: 0.0, 1: 0.0, 2: 0.0047, 3: 0.0, 4: 0.004... |
Is there a way to gain access to the class of a method when all you have is a callable | 33,782,461 | 9 | 2015-11-18T14:19:58Z | 33,782,524 | 14 | 2015-11-18T14:23:16Z | [
"python",
"methods",
"metaprogramming"
] | I have code that is like:
```
class Foo:
def foo(self):
pass
class Bar:
def foo(self):
pass
f = random.choice((Foo().foo, Bar().foo))
```
How do I access `Bar` or `Foo` from f?
`f.__dict__` is of little to no help, but as `repr(f)` gives `<bound method Bar.foo of <__main__.Bar object at 0x... | Each bound method has the `__self__` attribute which is the
> instance to which this method is bound, or `None`
(copied from [here](https://docs.python.org/3/library/inspect.html#types-and-members))
More about bound methods (from [*Data Model*](https://docs.python.org/3/reference/datamodel.html)):
> If you access a... |
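The answer's point can be checked with a runnable sketch using the classes from the question — `owner` is a name introduced here for illustration:

```python
import random

class Foo:
    def foo(self):
        pass

class Bar:
    def foo(self):
        pass

f = random.choice((Foo().foo, Bar().foo))

# __self__ is the instance the bound method belongs to,
# so its type is the class we are looking for.
owner = type(f.__self__)
print(owner.__name__)  # "Foo" or "Bar", depending on the random choice
```

`f.__self__.__class__` works the same way, and on Python 3 `f.__func__.__qualname__` also reveals the defining class name.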
How can I visualize the weights(variables) in cnn in Tensorflow? | 33,783,672 | 17 | 2015-11-18T15:13:38Z | 33,794,463 | 12 | 2015-11-19T02:55:47Z | [
"python",
"tensorflow"
] | After training the cnn model, I want to visualize the weight or print out the weights, what can I do?
I cannot even print out the variables after training.
Thank you! | To visualize the weights, you can use a [`tf.image_summary()`](http://tensorflow.org/api_docs/python/train.html#image_summary) op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a [`tf.train.SummaryWriter`](http://tensorflow.org/api_docs/python/train.html#Sum... |
How can I visualize the weights(variables) in cnn in Tensorflow? | 33,783,672 | 17 | 2015-11-18T15:13:38Z | 35,858,862 | 9 | 2016-03-08T04:17:57Z | [
"python",
"tensorflow"
] | After training the cnn model, I want to visualize the weight or print out the weights, what can I do?
I cannot even print out the variables after training.
Thank you! | Like @mrry said, you can use `tf.image_summary`. For example, for `cifar10_train.py`, you can put this code somewhere under `def train()`. Note how you access a var under scope 'conv1'
```
# Visualize conv1 features
with tf.variable_scope('conv1') as scope_conv:
weights = tf.get_variable('weights')
# scale weight... |
How is `min` of two integers just as fast as 'bit hacking'? | 33,784,519 | 38 | 2015-11-18T15:50:36Z | 33,784,710 | 33 | 2015-11-18T16:00:20Z | [
"python"
] | I was watching a [lecture series](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-2-bit-hacks/) on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers:
```
return x ^ (... | This is likely due to how the `min` function is implemented in Python.
Many Python builtins are actually implemented in low-level languages such as C or assembly, and use the Python APIs in order to be callable in Python.
Your bit-fiddling technique is likely very fast in C, but in Python the interpretation overhead of... |
How is `min` of two integers just as fast as 'bit hacking'? | 33,784,519 | 38 | 2015-11-18T15:50:36Z | 33,785,191 | 22 | 2015-11-18T16:22:54Z | [
"python"
] | I was watching a [lecture series](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-2-bit-hacks/) on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers:
```
return x ^ (... | Well, the bit hacking trick might have been faster in the 90s, but it is slower on current machines by a factor of two. Compare for yourself:
```
// gcc -Wall -Wextra -std=c11 ./min.c -D_POSIX_SOURCE -Os
// ./a.out 42
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define COUNT (1 << 28)
static int array[... |
How is `min` of two integers just as fast as 'bit hacking'? | 33,784,519 | 38 | 2015-11-18T15:50:36Z | 33,792,518 | 14 | 2015-11-18T23:28:39Z | [
"python"
] | I was watching a [lecture series](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-2-bit-hacks/) on 'Bit Hacking' and came across the following optimization for finding the minimum of two integers:
```
return x ^ (... | Lets do a slightly deeper dive here to find out the real reason behind this weirdness (if any).
Lets create 3 methods and look at their python bytecode and runtimes...
```
import dis
def func1(x, y):
return min(x, y)
def func2(x, y):
if x < y:
return x
return y
def func3(x, y):
return x ^ (... |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 33,785,756 | 37 | 2015-11-18T16:49:22Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
********************... | Install lxml from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml> for your python version. It's a precompiled WHL with required modules/dependencies.
The site lists several packages, when e.g. using Win32 Python 2.7, use `lxml-3.6.1-cp27-cp27m-win32.whl`.
Just install with `pip install lxml-3.6.1-cp27-cp27m-win32.w... |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 34,433,713 | 26 | 2015-12-23T10:35:17Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
********************... | Try to use:
`easy_install lxml`
That works for me on Windows 10 with Python 2.7. |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 34,816,278 | 20 | 2016-01-15T17:09:32Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
********************... | On Mac OS X El Capitan I had to run these two commands to fix this error:
```
xcode-select --install
pip install lxml
```
Which ended up installing lxml-3.5.0
When you run the xcode-select command you may have to sign a EULA (so have an X-Term handy for the UI if you're doing this on a headless machine). |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 35,872,362 | 60 | 2016-03-08T16:12:13Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
********************... | I had this issue and realised that whilst I did have libxml2 installed, I didn't have the necessary development libraries required by the python package. Installing them solved the problem:
```
sudo apt-get install libxml2-dev libxslt1-dev
sudo pip install lxml
``` |
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip | 33,785,755 | 42 | 2015-11-18T16:49:22Z | 37,462,166 | 7 | 2016-05-26T13:24:21Z | [
"python"
] | I'm getting an error `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?` when trying to install lxml through pip.
```
c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
In case anyone else has the same issue, on
> CentOS, try:
```
yum install python-lxml
```
> Ubuntu
```
sudo apt-get install -y python-lxml
```
worked for me. |
TensorFlow Error found in Tutorial | 33,785,936 | 11 | 2015-11-18T16:57:38Z | 33,786,141 | 14 | 2015-11-18T17:09:07Z | [
"python",
"tensorflow"
] | Dare I even ask? This is such a new technology at this point that I can't find a way to solve this seemingly simple error. The tutorial I'm going over can be found here- <http://www.tensorflow.org/tutorials/mnist/pros/index.html#deep-mnist-for-experts>
I literally copied and pasted all of the code into IPython Noteboo... | I figured it out. As you see in the value error, it says `No default session is registered. Use 'with DefaultSession(sess)' or pass an explicit session to eval(session=sess)` so the answer I came up with is to pass an explicit session to eval, just like it says. Here is where I made the changes.
```
if i%100 == 0:
... |
Tensorflow: Using Adam optimizer | 33,788,989 | 22 | 2015-11-18T19:45:42Z | 33,846,659 | 27 | 2015-11-21T17:53:30Z | [
"python",
"tensorflow"
] | I am experimenting with some simple models in tensorflow, including one that looks very similar to the first [MNIST for ML Beginners example](http://www.tensorflow.org/tutorials/mnist/beginners/index.md), but with a somewhat larger dimensionality. I am able to use the gradient descent optimizer with no problems, gettin... | The AdamOptimizer class creates additional variables, called "slots", to hold values for the "m" and "v" accumulators.
See the source here if you're curious, it's actually quite readable:
<https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/adam.py#L39> . Other optimizers, such as Momentum ... |
python numpy operation instead of for loops | 33,792,040 | 5 | 2015-11-18T22:53:42Z | 33,792,971 | 10 | 2015-11-19T00:08:11Z | [
"python",
"numpy",
"vectorization"
] | I wrote some lines in Python which work fine but are very slow; I think this is due to the for loops. I hope one can speed up the following operations using numpy commands. Let me define the goal.
Let's assume I have (always) a 3d numpy array (first dim: row, second dim: col, third dim: frame); for simplicity I take a 2d array... | It's quite easy to vectorize what you're doing:
```
import numpy as np
#generate dummy data
nrows=6
ncols=11
nframes=3
threshold=0.3
data=np.random.rand(nrows,ncols,nframes)
CM_tilde = np.mean(data, axis=1)
N = data.shape[1]
all_CMs2 = np.mean(np.where(data < (CM_tilde[:,None,:]+threshold),data,CM_tilde[:,None,:]),... |
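The broadcasting step at the heart of that expression can be checked in isolation; the dimensions and threshold below come from the answer's dummy data, while `expanded` and `clipped` are names introduced here:

```python
import numpy as np

# dummy data with the same shape convention: rows x cols x frames
data = np.random.rand(6, 11, 3)

CM_tilde = np.mean(data, axis=1)   # shape (6, 3): one mean per (row, frame)
expanded = CM_tilde[:, None, :]    # shape (6, 1, 3): broadcasts over the col axis
clipped = np.where(data < expanded + 0.3, data, expanded)

print(clipped.shape)  # (6, 11, 3): same shape as data
```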
Matching/Counting lists in python dictionary | 33,792,339 | 4 | 2015-11-18T23:14:07Z | 33,792,487 | 7 | 2015-11-18T23:25:39Z | [
"python",
"dictionary"
] | I have a dictionary `{x: [a,b,c,d], y: [a,c,g,f,h],...}`. So the key is one variable with the value being a list (of different sizes).
My goal is to match up each list against every list in the dictionary and come back with a count of how many times a certain list has been repeated.
I tried this but does not seem to ... | You could map the lists to tuples so they can be used as keys and use a `Counter` dict to do the counting:
```
from collections import Counter
count = Counter(map(tuple, d.values()))
``` |
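For example, with the dictionary from the question (plus an extra `z` key added here so a repeat actually occurs):

```python
from collections import Counter

d = {'x': ['a', 'b', 'c', 'd'],
     'y': ['a', 'c', 'g', 'f', 'h'],
     'z': ['a', 'b', 'c', 'd']}  # 'z' is an extra key added for the demo

count = Counter(map(tuple, d.values()))
print(count[('a', 'b', 'c', 'd')])       # 2
print(count[('a', 'c', 'g', 'f', 'h')])  # 1
```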
Why PyGame coordinate system has its origin at the top-left corner? | 33,805,481 | 9 | 2015-11-19T13:37:13Z | 33,806,018 | 8 | 2015-11-19T14:00:41Z | [
"python",
"graphics",
"pygame"
] | I've been using PyGame for a while and I have had to make coordinate transformations to change the normal coordinate system used in mathematics (with its origin at the bottom-left corner) to the PyGame coordinate system (with its origin at the top-left corner). I found [this post](http://stackoverflow.com/questions/101... | It's not just PyGame - it's an old convention for graphics displays. Many APIs allow you to override and choose your own convention, but even then, they are mapping back to that top-left-corner convention in the background.
The origin of the convention is easy to see for old CRT displays. The raster scan for each fram... |
Why does g.append(l.pop()) return the second half of l but l only has the first half | 33,806,742 | 2 | 2015-11-19T14:32:39Z | 33,806,814 | 11 | 2015-11-19T14:35:53Z | [
"python",
"list",
"python-2.7"
] | While I was working on a program, I noticed some strange behavior in my code. Here is what I saw.
```
>>> l = [1,2,3,4,5,6,7,8]
>>> g = []
>>> for i in l:
... g.append(l.pop())
...
>>> l
[1, 2, 3, 4]
>>> g
[8, 7, 6, 5]
>>>
```
The list `g` was supposed to have all the elements of list `l` here! But why did ... | **YOU SHOULD NORMALLY NOT DO THIS!**
Changing the iterable you are looping over is not good!
**Explanation:**
As you can see, `l.pop()` always takes the last item of `l`,
and `g.append()` then adds the popped item to the end of `g`.
After 4 runs, `l` doesn't have any items left.
First Run:
```
i = v
l = [1,2,3,4,5,6,... |
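If the goal really is to drain `l` into `g`, testing `l` directly instead of iterating over it behaves predictably; a minimal sketch:

```python
l = [1, 2, 3, 4, 5, 6, 7, 8]
g = []

while l:                # keep going until l is empty
    g.append(l.pop())   # pop() removes and returns the last item

print(l)  # []
print(g)  # [8, 7, 6, 5, 4, 3, 2, 1]
```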
Lists are the same but not considered equal? | 33,808,011 | 5 | 2015-11-19T15:29:04Z | 33,808,055 | 9 | 2015-11-19T15:30:50Z | [
"python",
"list",
"python-2.7",
"equality"
] | I'm a novice at Python encountering a problem testing for equality. I have a list of lists, states[]; each state contains x, in this specific case x=3, Boolean values. In my program, I generate a list of Boolean values, the first three of which correspond to a state[i]. I loop through the list of states testing for equality ... | You are comparing a **tuple** `(True, True, True)` against a **list** `[True, True, True]`
Of course they're different.
**Try casting your `list` to `tuple` on-the-go, to compare:**
```
temp1 = []
for boolean in aggregate:
temp1.append(boolean)
if len(temp1) == len(propositions):
break
print temp1
print state... |
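The mismatch and the suggested cast are easy to verify on their own (Python 3 syntax here, unlike the Python 2 prints above):

```python
aggregate = [True, True, True]   # the list built up in the loop
state = (True, True, True)       # the tuple stored in states

print(aggregate == state)         # False: a list never equals a tuple
print(tuple(aggregate) == state)  # True once both sides are tuples
```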
Convert string ">0" to python > 0 | 33,808,854 | 2 | 2015-11-19T16:07:18Z | 33,808,961 | 8 | 2015-11-19T16:11:55Z | [
"python"
] | I read the conditions ">0", "<60", etc., from an XML file. What is the best way to convert them into Python comparisons? The sample code shows what I want to do:
```
if str == ">0":
if x > 0:
print "yes"
else:
print "no"
elif str == "<60":
if x < 60:
... | I would use [regex](https://docs.python.org/2/library/re.html) and [`operator`](https://docs.python.org/2/library/operator.html).
```
from operator import lt, gt
import re
operators = {
">": gt,
"<": lt,
}
string = ">60"
x = 3
op, n = re.findall(r'([><])(\d+)', string)[0]
print(operators[op](x, int(n)))
``... |
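The truncated snippet rounds out into a small helper; the `check` name is an assumption added here, not part of the original answer:

```python
from operator import lt, gt
import re

operators = {
    ">": gt,
    "<": lt,
}

def check(condition, x):
    """Evaluate a condition string such as '>0' or '<60' against x."""
    op, n = re.findall(r'([><])(\d+)', condition)[0]
    return operators[op](x, int(n))

print(check(">0", 3))    # True: 3 > 0
print(check("<60", 75))  # False: 75 is not < 60
```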