Tag: Data Analysis

Exporting and Importing Elasticsearch Indices

In my project I need to run some local tests with data from a production Elasticsearch cluster, so I exported data from the production server and imported it into my local cluster. The same procedure can also be used for backing up and restoring data. Here are the instructions.

Before you start, check out the official documentation: Snapshot and Restore.

Backing up/exporting data:

  1. Modify your elasticsearch configuration file (normally elasticsearch.yml) and add a path.repo line, for example:
    path.repo: /usr/local/var/backups/
  2. Make sure this path has the correct permissions so that elasticsearch can read and write.
  3. Create the snapshot (a scripted version of these calls follows this list):
    curl -XPUT http://localhost:9200/_snapshot/my_backup -d '{"type": "fs", "settings": {"compress": "true", "location": "/usr/local/var/backups/"}}'
    curl -XPUT http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true
  4. Copy the files in the configured location to your local machine.
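
If you would rather script these calls than paste curl commands, here is a minimal sketch of the same two requests in Python. The requests library is my addition, not something the original setup needs:

# Rough equivalent of the two curl calls in step 3
import requests

# Register the filesystem repository
requests.put(
    'http://localhost:9200/_snapshot/my_backup',
    json={'type': 'fs',
          'settings': {'compress': True, 'location': '/usr/local/var/backups/'}})

# Create the snapshot and block until it finishes
r = requests.put(
    'http://localhost:9200/_snapshot/my_backup/snapshot_1',
    params={'wait_for_completion': 'true'})
print(r.json())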

Restoring/importing data:

  1. Modify your local elasticsearch configuration as in step 1 of the backup procedure.
  2. Place the snapshot files in the repo path.
  3. Close your indices:
    curl -XPOST http://localhost:9200/knx-bus/_close
  4. Import data:
    curl -XPOST http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore?pretty
  5. Reopen your indices:
    curl -XPOST http://localhost:9200/knx-bus/_open
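
The restore side can be scripted the same way; again a rough sketch with requests, mirroring the three curl calls above:

import requests

base = 'http://localhost:9200'

# Close the index so the restore can overwrite it
requests.post(base + '/knx-bus/_close')

# Restore snapshot_1; wait_for_completion is optional but makes the
# next step safe to run immediately
r = requests.post(base + '/_snapshot/my_backup/snapshot_1/_restore',
                  params={'wait_for_completion': 'true'})
print(r.json())

# Reopen the index
requests.post(base + '/knx-bus/_open')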

It is important that the Elasticsearch version on the importing side is compatible with the one that exported the data; in this case, your local machine has to run the same version or a newer one. If not, you need to upgrade Elasticsearch first. The official documentation says:

The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore it’s possible to restore a snapshot made from one cluster into another cluster. All that is required is registering the repository containing the snapshot in the new cluster and starting the restore process. The new cluster doesn’t have to have the same size or topology. However, the version of the new cluster should be the same or newer than the cluster that was used to create the snapshot.
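
A quick way to compare the two versions before attempting a restore (another small sketch with requests; production-host is a placeholder for your actual production node):

import requests

# The root endpoint reports the server's Elasticsearch version
for host in ('http://production-host:9200', 'http://localhost:9200'):
    info = requests.get(host).json()
    print(host, info['version']['number'])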

Installing Theano and CUDA on Mac OS X

I started trying Theano today and wanted to use the GPU (NVIDIA GeForce GT 750M, 2048 MB) on my Mac. Here are brief instructions on how to use the GPU on a Mac, largely following http://deeplearning.net/software/theano/install.html#mac-os.

Install Theano:

$ pip install Theano

Download and install CUDA: https://developer.nvidia.com/cuda-downloads

Put the following lines into your ~/.bash_profile:

# Theano and CUDA
PATH="/Developer/NVIDIA/CUDA-7.5/bin/:$PATH"
export LD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-7.5/lib/
export CUDA_ROOT=/Developer/NVIDIA/CUDA-7.5/
export THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32'

Note that the PATH line is necessary. Otherwise you may see the following message:

ERROR (theano.sandbox.cuda): nvcc compiler not found on $PATH. Check your nvcc installation and try again.
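
To confirm that nvcc is actually visible after reloading ~/.bash_profile, a quick check of my own (assuming Python 3 for shutil.which):

import shutil

# Should print something like /Developer/NVIDIA/CUDA-7.5/bin/nvcc
print(shutil.which('nvcc'))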

Configure Theano:

$ cat .theanorc 
[gcc]
cxxflags = -L/usr/local/lib -L/Developer/NVIDIA/CUDA-7.5/lib/
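
To double-check that Theano picked up the flags and the .theanorc settings, the effective configuration can be inspected from Python (a small check of my own, not from the linked instructions):

import theano

print(theano.config.device)  # expect 'gpu' with THEANO_FLAGS set as above
print(theano.config.floatX)  # expect 'float32'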

Test if GPU is used:

$ cat check.py 
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

$ THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 time python check.py 
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 1.743682 seconds
Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813  2.29967761
  1.62323284]
Used the cpu
        2.47 real         2.19 user         0.27 sys
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 time python check.py 
Using gpu device 0: GeForce GT 750M
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 1.186971 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu
        2.09 real         1.59 user         0.41 sys

A more realistic example:

$ cat lr.py 
import numpy
import theano
import theano.tensor as T
rng = numpy.random

N = 400
feats = 784
D = (rng.randn(N, feats).astype(theano.config.floatX),
     rng.randint(size=N, low=0, high=2).astype(theano.config.floatX))
training_steps = 10000

# Declare Theano symbolic variables
x = T.matrix("x")
y = T.vector("y")
w = theano.shared(rng.randn(feats).astype(theano.config.floatX), name="w")
b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name="b")
x.tag.test_value = D[0]
y.tag.test_value = D[1]

# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w)-b)) # Probability of having a one
prediction = p_1 > 0.5 # The prediction that is done: 0 or 1
xent = -y*T.log(p_1) - (1-y)*T.log(1-p_1) # Cross-entropy
cost = xent.mean() + 0.01*(w**2).sum() # The cost to optimize
gw,gb = T.grad(cost, [w,b])

# Compile expressions to functions
train = theano.function(
            inputs=[x,y],
            outputs=[prediction, xent],
            updates=[(w, w-0.01*gw), (b, b-0.01*gb)],
            name = "train")
predict = theano.function(inputs=[x], outputs=prediction,
            name = "predict")

if any([x.op.__class__.__name__ in ['Gemv', 'CGemv', 'Gemm', 'CGemm'] for x in
        train.maker.fgraph.toposort()]):
    print('Used the cpu')
elif any([x.op.__class__.__name__ in ['GpuGemm', 'GpuGemv'] for x in
          train.maker.fgraph.toposort()]):
    print('Used the gpu')
else:
    print('ERROR, not able to tell if theano used the cpu or the gpu')
    print(train.maker.fgraph.toposort())

for i in range(training_steps):
    pred, err = train(D[0], D[1])

print("target values for D")
print(D[1])

print("prediction on D")
print(predict(D[0]))
$ THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 time python lr.py 
Used the cpu
target values for D
[ 1.  1.  0.  1.  0.  0.  0.  0.  0.  1.  1.  0.  0.  0.  0.  0.  0.  1.
  1.  0.  0.  1.  0.  0.  1.  1.  0.  1.  1.  1.  1.  0.  1.  1.  0.  1.
  0.  0.  0.  0.  0.  1.  0.  0.  0.  1.  1.  0.  1.  1.  1.  0.  1.  0.
  0.  0.  0.  0.  0.  1.  0.  1.  0.  0.  0.  1.  1.  1.  0.  0.  1.  1.
  1.  1.  0.  0.  0.  1.  0.  0.  1.  1.  0.  0.  1.  1.  1.  1.  0.  1.
  0.  0.  0.  0.  1.  0.  0.  1.  1.  1.  0.  0.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  0.  1.  1.  0.  0.  1.  0.  0.  0.  1.  0.  1.  1.  1.
  1.  0.  0.  1.  0.  1.  1.  1.  1.  1.  1.  1.  1.  1.  0.  1.  1.  0.
  1.  0.  1.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  0.
  1.  0.  1.  0.  0.  1.  0.  0.  1.  1.  1.  1.  0.  1.  0.  0.  1.  0.
  0.  0.  1.  1.  1.  1.  1.  1.  1.  0.  1.  1.  1.  0.  1.  0.  1.  0.
  0.  1.  1.  0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  1.  0.  1.  0.  1.
  1.  0.  1.  1.  1.  0.  0.  1.  1.  1.  1.  0.  0.  0.  1.  1.  0.  0.
  1.  0.  0.  0.  0.  1.  1.  1.  0.  1.  1.  1.  0.  1.  0.  0.  0.  0.
  0.  1.  1.  1.  1.  1.  1.  0.  0.  1.  1.  1.  0.  1.  0.  1.  0.  1.
  1.  0.  0.  0.  1.  1.  0.  0.  1.  0.  0.  0.  0.  1.  0.  0.  0.  1.
  0.  1.  0.  1.  1.  0.  1.  1.  0.  0.  0.  0.  1.  0.  0.  0.  0.  1.
  0.  1.  0.  0.  1.  1.  0.  0.  1.  1.  0.  1.  0.  1.  0.  0.  1.  1.
  0.  1.  1.  0.  0.  1.  1.  0.  0.  1.  0.  1.  1.  0.  0.  0.  1.  0.
  0.  0.  1.  0.  0.  0.  0.  1.  1.  0.  1.  1.  1.  0.  1.  1.  1.  1.
  1.  0.  0.  1.  0.  0.  0.  0.  1.  1.  0.  0.  0.  0.  0.  1.  1.  1.
  0.  1.  1.  1.  0.  0.  0.  0.  1.  1.  1.  0.  0.  0.  0.  1.  0.  0.
  1.  1.  0.  1.]
prediction on D
[1 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 1 0 0 1 1 0 1 1 1 1 0 1 1 0 1 0
 0 0 0 0 1 0 0 0 1 1 0 1 1 1 0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 1 1 0 0 1 1 1 1
 0 0 0 1 0 0 1 1 0 0 1 1 1 1 0 1 0 0 0 0 1 0 0 1 1 1 0 0 1 1 1 1 1 1 1 1 1
 1 0 1 1 0 0 1 0 0 0 1 0 1 1 1 1 0 0 1 0 1 1 1 1 1 1 1 1 1 0 1 1 0 1 0 1 1
 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 1 0 0 1 1 1 1 0 1 0 0 1 0 0 0 1 1 1
 1 1 1 1 0 1 1 1 0 1 0 1 0 0 1 1 0 0 1 0 0 0 0 0 0 0 1 0 1 0 1 1 0 1 1 1 0
 0 1 1 1 1 0 0 0 1 1 0 0 1 0 0 0 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 1 1 1 1 1 1
 0 0 1 1 1 0 1 0 1 0 1 1 0 0 0 1 1 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 1
 0 0 0 0 1 0 0 0 0 1 0 1 0 0 1 1 0 0 1 1 0 1 0 1 0 0 1 1 0 1 1 0 0 1 1 0 0
 1 0 1 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1 0 1 1 1 0 1 1 1 1 1 0 0 1 0 0 0 0 1 1
 0 0 0 0 0 1 1 1 0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 1 0 0 1 1 0 1]
        8.92 real         8.24 user         1.14 sys
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 time python lr.py 
Using gpu device 0: GeForce GT 750M
Used the gpu
target values for D
[ 1.  0.  0.  0.  0.  1.  0.  0.  1.  1.  0.  0.  1.  1.  0.  0.  1.  1.
  0.  0.  0.  1.  1.  0.  1.  1.  1.  0.  0.  1.  1.  1.  1.  1.  1.  0.
  0.  1.  0.  0.  1.  1.  0.  0.  1.  1.  0.  1.  0.  1.  1.  0.  1.  1.
  1.  0.  1.  1.  0.  0.  0.  1.  1.  1.  1.  1.  0.  0.  1.  1.  0.  1.
  1.  1.  1.  0.  1.  1.  0.  1.  1.  1.  0.  0.  0.  1.  1.  0.  0.  0.
  1.  0.  1.  0.  0.  0.  0.  1.  1.  1.  1.  0.  0.  1.  0.  1.  0.  1.
  1.  0.  1.  1.  0.  0.  0.  0.  1.  0.  0.  1.  0.  0.  0.  1.  0.  1.
  1.  1.  0.  0.  0.  1.  0.  1.  0.  1.  0.  1.  1.  1.  1.  1.  0.  1.
  1.  0.  1.  1.  0.  0.  1.  0.  1.  0.  0.  1.  0.  0.  1.  0.  0.  0.
  1.  0.  0.  1.  1.  1.  1.  0.  0.  0.  1.  1.  1.  0.  1.  0.  0.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  0.  0.  0.  0.  0.  1.  1.  1.  0.  1.
  0.  1.  0.  1.  1.  1.  1.  0.  0.  0.  1.  1.  1.  1.  0.  0.  0.  1.
  0.  1.  1.  1.  0.  1.  1.  1.  0.  0.  0.  0.  1.  0.  1.  0.  0.  1.
  0.  0.  1.  1.  0.  1.  0.  1.  1.  1.  0.  0.  1.  1.  0.  0.  0.  0.
  1.  0.  0.  1.  0.  0.  0.  0.  1.  0.  0.  1.  1.  1.  1.  1.  1.  1.
  0.  1.  1.  0.  0.  0.  1.  0.  1.  1.  0.  0.  0.  0.  0.  0.  1.  0.
  1.  1.  1.  0.  0.  1.  0.  1.  0.  0.  1.  0.  1.  0.  0.  1.  0.  0.
  1.  1.  0.  1.  1.  1.  0.  0.  0.  0.  0.  1.  0.  1.  0.  0.  0.  1.
  0.  0.  1.  1.  0.  1.  1.  0.  1.  1.  1.  0.  1.  1.  0.  0.  0.  0.
  0.  0.  1.  1.  1.  1.  1.  1.  1.  1.  0.  1.  1.  1.  0.  1.  0.  1.
  1.  1.  0.  1.  1.  0.  1.  1.  1.  0.  0.  1.  1.  0.  0.  0.  0.  0.
  1.  0.  0.  1.  1.  1.  0.  1.  0.  0.  1.  1.  0.  1.  1.  0.  1.  1.
  0.  0.  1.  0.]
prediction on D
[1 0 0 0 0 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 0 1 1 0 1 1 1 0 0 1 1 1 1 1 1 0 0
 1 0 0 1 1 0 0 1 1 0 1 0 1 1 0 1 1 1 0 1 1 0 0 0 1 1 1 1 1 0 0 1 1 0 1 1 1
 1 0 1 1 0 1 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 1 1 1 1 0 0 1 0 1 0 1 1 0 1
 1 0 0 0 0 1 0 0 1 0 0 0 1 0 1 1 1 0 0 0 1 0 1 0 1 0 1 1 1 1 1 0 1 1 0 1 1
 0 0 1 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 1 1 1 0 0 0 1 1 1 0 1 0 0 1 1 1 1 1 1
 1 1 1 0 0 0 0 0 1 1 1 0 1 0 1 0 1 1 1 1 0 0 0 1 1 1 1 0 0 0 1 0 1 1 1 0 1
 1 1 0 0 0 0 1 0 1 0 0 1 0 0 1 1 0 1 0 1 1 1 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0
 0 1 0 0 1 1 1 1 1 1 1 0 1 1 0 0 0 1 0 1 1 0 0 0 0 0 0 1 0 1 1 1 0 0 1 0 1
 0 0 1 0 1 0 0 1 0 0 1 1 0 1 1 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 1 1 0 1 1 0 1
 1 1 0 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 1 1 1 0 1 0 1 1 1 0 1 1 0 1 1 1 0
 0 1 1 0 0 0 0 0 1 0 0 1 1 1 0 1 0 0 1 1 0 1 1 0 1 1 0 0 1 0]
       19.78 real        17.61 user         1.24 sys

So it seems this GPU does not outperform the CPU. Well, the GT 750M may not be the best GPU you can get… Someone else here has a similar experience.

NumPy’s ndarray indexing

NumPy provides a new kind of array: the n-dimensional array, or ndarray. It is usually fixed-size and holds items of the same type and size. For example, to define a 2×3 matrix:

import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
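
Plain single-element indexing works as you would expect (a trivial example added for contrast with the array indexing described next):

a[1, 2]  # 6 -- row 1, column 2
a[0]     # array([1, 2, 3], dtype=int32) -- the whole first row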

When indexing an ndarray, "array indexing" is supported in addition to single-element indexing. (See http://docs.scipy.org/doc/numpy/user/basics.indexing.html)

It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. The other involves giving a boolean array of the proper shape to indicate the values to be selected. Index arrays are a very powerful tool that allow one to avoid looping over individual elements in arrays and thus greatly improve performance.

So you basically can do the following:

a = np.array([1, 2, 3], np.int32)
a[np.array([0, 2])] # Fetch the first and third elements, returns array([1, 3], dtype=int32)
a[np.array([True, False, True])] # Same as the line above

Besides, comparison operators on ndarrays return another ndarray, built by comparing the elements one by one:

a = np.array([1, 2, 3], np.int32)
a == 2 # Returns array([False,  True, False], dtype=bool)
a != 2 # Returns array([ True, False,  True], dtype=bool)
a[a != 2] # Returns a subarray excluding the elements equal to 2, in this case array([1, 3], dtype=int32)
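
The same boolean-mask idea carries over to the 2×3 matrix from the beginning of the post (continuing that example):

a = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
a > 3     # array([[False, False, False], [ True,  True,  True]], dtype=bool)
a[a > 3]  # array([4, 5, 6], dtype=int32) -- the result is flattened to 1-D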

MapReduce in MongoDB

http://docs.mongodb.org/manual/core/map-reduce/

http://docs.mongodb.org/manual/reference/command/mapReduce/

> db.lattern_money_record.mapReduce( function() { emit(this.quantity, 1) }, function(key, values) { return Array.sum(values) }, {   query: {'quantity': {$gt: 500}}, out: {inline: 1} } )
{
	"results" : [
		{
			"_id" : 550,
			"value" : 3
		},
		{
			"_id" : 570,
			"value" : 1
		},
		{
			"_id" : 580,
			"value" : 1
		},
		{
			"_id" : 583,
			"value" : 1
		},
		{
			"_id" : 587,
			"value" : 1
		},
		{
			"_id" : 600,
			"value" : 2
		},
		{
			"_id" : 660,
			"value" : 1
		},
		{
			"_id" : 700,
			"value" : 2
		},
		{
			"_id" : 800,
			"value" : 5
		},
		{
			"_id" : 900,
			"value" : 2
		},
		{
			"_id" : 924,
			"value" : 1
		},
		{
			"_id" : 949,
			"value" : 1
		},
		{
			"_id" : 980,
			"value" : 1
		},
		{
			"_id" : 990,
			"value" : 1
		},
		{
			"_id" : 1000,
			"value" : 12
		}
	],
	"timeMillis" : 36,
	"counts" : {
		"input" : 35,
		"emit" : 35,
		"reduce" : 6,
		"output" : 15
	},
	"ok" : 1,
}
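
The same aggregation can also be run from Python with pymongo. A rough sketch, where the database name 'test' is a placeholder:

from bson.code import Code
from pymongo import MongoClient

# Placeholder database name; adjust to wherever lattern_money_record lives
coll = MongoClient()['test']['lattern_money_record']

mapper = Code('function() { emit(this.quantity, 1); }')
reducer = Code('function(key, values) { return Array.sum(values); }')

# inline_map_reduce returns the result documents directly instead of
# writing them to an output collection
for doc in coll.inline_map_reduce(mapper, reducer,
                                  query={'quantity': {'$gt': 500}}):
    print(doc)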

The MapReduce code I used to analyze the 20 million hotel reservation records:

from bson.code import Code

def get_aggregation(collection):
    '''
    1. Get unique set of people
    2. Get most frequent users
    3. Get aggregation by location of birth, age, month and day of birth
    '''
    # Emit multiple times in mapper function:
    # http://docs.mongodb.org/manual/reference/command/mapReduce/
    mapper = Code('''
                  function() {
                    function validate_rid(id) {
                        // From: https://gist.github.com/foxwoods/1817822
                        // 18-digit resident ID number
                        // National standard GB 11643-1999
                        function rid18(id) {
                            if(! /\d{17}[\dxX]/.test(id)) {
                                return false;
                            }
                            var modcmpl = function(m, i, n) { return (i + n - m % i) % i; },
                                f = function(v, i) { return v * (Math.pow(2, i-1) % 11); },
                                s = 0;
                            for(var i=0; i<17; i++) {
                                s += f(+id.charAt(i), 18-i);
                            }
                            var c0 = id.charAt(17),
                                c1 = modcmpl(s, 11, 1);
                            return c0-c1===0 || (c0.toLowerCase()==='x' && c1===10);
                        }

                        // 15-digit resident ID number
                        // Discontinued as of January 1, 2013
                        // http://www.gov.cn/flfg/2011-10/29/content_1981408.htm
                        function rid15(id) {
                            var pattern = /[1-9]\d{5}(\d{2})(\d{2})(\d{2})\d{3}/,
                                matches, y, m, d, date;
                            matches = id.match(pattern);
                            y = +('19' + matches[1]);
                            m = +matches[2];
                            d = +matches[3];
                            date = new Date(y, m-1, d);
                            return (date.getFullYear()===y && date.getMonth()===m-1 && date.getDate()===d);
                        }

                        // return rid18(id) || rid15(id);
                        try {
                            ret = rid18(id) || rid15(id);
                            return ret;
                        } catch (err) {
                            return false;
                        }
                    }

                    function validateEmail(email) {
                        // http://stackoverflow.com/questions/46155/validate-email-address-in-javascript
                        var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
                        return re.test(email);
                    }

                    var str = this.CtfId;
                    if (str && validate_rid(str)) {
                        var prov = parseInt(str.slice(0, 2));
                        var year, month, day, sex;
                        if (str.length == 15) {
                            year = parseInt('19' + str.slice(6, 8));
                            month = parseInt(str.slice(8, 10));
                            day = parseInt(str.slice(10, 12));
                            sex = parseInt(str.slice(14, 15)) % 2 ? 'M' : 'F';
                        } else {
                            year = parseInt(str.slice(6, 10));
                            month = parseInt(str.slice(10, 12));
                            day = parseInt(str.slice(12, 14));
                            sex = parseInt(str.slice(16, 17)) % 2 ? 'M' : 'F';
                        }
                        var age = 2013 - year;
                        var valid_provs = [11, 12, 13, 14, 15,
                            21, 22, 23, 31, 32, 33, 34, 35, 36, 37,
                            41, 42, 43, 44, 45, 46,
                            50, 51, 52, 53, 54,
                            61, 62, 63, 64, 65,
                            71, 81, 82, 91];
                        if (age <= 0 || age > 100 ||
                            month <=0 || month > 12 ||
                            day <= 0 || day > 31 ||
                            valid_provs.indexOf(prov) == -1) {
                            emit('Corrupted', 1);
                        } else {
                            // emit('Province ' + prov, 1);
                            // emit('Age ' + age, 1);
                            // emit('Month ' + month, 1);
                            // emit('Day ' + day, 1);
                            // emit('Sex ' + sex, 1);
                            // emit('Prov ' + prov + ' Sex ' + sex, 1);
                            // if (this.Address && this.Address.length > 3) {
                            //     var cur_prov = this.Address.slice(0, 3);
                            //     emit('From ' + prov + ' to ' + cur_prov, 1);
                            // }

                            // var email = this.EMail;
                            // if (email && validateEmail(email)) {
                            //     var idx = email.lastIndexOf('@');
                            //     var domain = email.slice(idx + 1);
                            //     emit(domain.toLowerCase(), 1);
                            // }

                            if (prov == 32 && sex == 'M') {
                                emit(str, 1);
                            }
                            // if (prov == 32 && sex == 'F') {
                            //     emit(str, 1);
                            // }
                        }
                    } else {
                        emit('Corrupted', 1);
                    }
                  }''')
    reducer = Code('''
                   function(key, values) {
                    return Array.sum(values);
                   }''')
    result = collection.map_reduce(
        mapper, reducer, 'aggregation', query={'CtfTp': 'ID'}
    )
    return result
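
For completeness, this is roughly how the function gets called and the results read back; the database and collection names are placeholders:

from pymongo import MongoClient

collection = MongoClient()['hotel']['records']

result = get_aggregation(collection)
# map_reduce with a string output name returns the output collection
# ('aggregation'), so the counts can be read back with a normal query
for doc in result.find().sort('value', -1).limit(10):
    print(doc['_id'], doc['value'])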
