Flask example 1

from flask import Flask, request, render_template

app = Flask(__name__)

@app.route('/')
def my_form():
    return render_template('my-form.html')

@app.route('/', methods=['POST'])
def my_form_post():
    text = request.form['text']
    processed_text = text.upper()
    return processed_text

if __name__ == '__main__':
    # Port 80 usually requires elevated privileges; Flask's default is 5000.
    app.run(host='localhost', port=80)

Requesting registration plate data:

import requests

url = "https://opendata.rdw.nl/resource/m9d7-ebf2.json"
plate = 'GB224X' #str(input("Enter registration number plate..."))
querystring = {}
querystring["kenteken"] = plate
#querystring = {"kenteken":"GB224X"}

headers = {
    'User-Agent': "PostmanRuntime/7.20.1",
    'Accept': "*/*",
    'Cache-Control': "no-cache",
    'Postman-Token': "43d07009-34de-4c57-99d9-af76e648cd9b,f85a65e3-2d6f-47df-813e-d4d592abff65",
    'Host': "opendata.rdw.nl",
    'Accept-Encoding': "gzip, deflate",
    'Connection': "keep-alive"
}

def rdw():
    response = requests.get(url, headers=headers, params=querystring)
    response.raise_for_status()  # fail loudly on HTTP errors
    return response.text
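The endpoint returns a JSON array of vehicle records. A minimal sketch of parsing such a response with the standard json module; the sample payload below is illustrative, not real RDW data, though "kenteken" (plate) and "merk" (brand) are field names from the RDW open data schema:

```python
import json

# Illustrative payload shaped like the RDW response: a JSON array of records.
sample = '[{"kenteken": "GB224X", "merk": "VOLVO", "voertuigsoort": "Personenauto"}]'

records = json.loads(sample)
for record in records:
    # Each record is a dict keyed by RDW field names such as "kenteken".
    print(record["kenteken"], record.get("merk", "unknown"))
```

In real use, the string passed to json.loads would be the return value of rdw(); an empty array means no vehicle matched the plate.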

Amazon Redshift – What to consider before defining a primary key

Amazon Redshift recommends defining primary key and foreign key constraints wherever applicable, but they are informational only: Redshift does not enforce them. In other words, a primary key column accepts duplicate values, and a foreign key column accepts values that do not exist in the referenced table. So why does Redshift include primary and foreign keys in its list of best practices? Because the query optimizer uses them to choose the most suitable execution plan. You therefore need to be very careful when defining these constraints on your tables. Let us see why with a real-life example.
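To illustrate the pitfall, a minimal schema sketch (table and column names hypothetical): Redshift accepts the second insert without error even though it violates the declared primary key, so deduplication remains the loader's responsibility.

```sql
CREATE TABLE users (
    id   INT PRIMARY KEY,   -- informational only; Redshift does not enforce it
    name VARCHAR(100)
);

INSERT INTO users VALUES (1, 'alice');
INSERT INTO users VALUES (1, 'bob');  -- succeeds despite the duplicate key

-- Checking for duplicates must be done manually:
SELECT id, COUNT(*) FROM users GROUP BY id HAVING COUNT(*) > 1;
```

Meanwhile, the optimizer may trust the constraint and, for example, skip deduplication when planning certain queries, which is exactly why a violated-but-declared key can silently produce wrong results.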

Streaming data with Amazon Kinesis


At Sqreen we use the Amazon Kinesis service to process data from our agents in near real time. This kind of processing has recently become popular with the appearance of general-purpose platforms that support it, such as Apache Kafka. Since these platforms deal with streams of data, such processing is commonly called "stream processing". It is a departure from the old model of analytics, which ran the analysis offline in batches (hence the name "batch processing") rather than online.
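To make the batch-versus-stream distinction concrete, here is a toy sketch in plain Python (no Kinesis involved): a batch job waits for the whole dataset before producing one answer, while a stream processor keeps a running result that is updated as each record arrives.

```python
def batch_average(records):
    # Batch processing: the full dataset must be available before computing.
    return sum(records) / len(records)

def stream_averages(records):
    # Stream processing: maintain running state, updated per record.
    total, count = 0, 0
    for value in records:
        total += value
        count += 1
        yield total / count  # a result is available after every record

data = [4, 8, 6]
print(batch_average(data))          # one answer at the end: 6.0
print(list(stream_averages(data)))  # an answer after each record: [4.0, 6.0, 6.0]
```

A real Kinesis consumer works on the same principle, except the records arrive over the network from a shard rather than from an in-memory list.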