GFPGAN Face Restoration for Beautiful Faces with Python

This is how I make face photos from the web shell and photobooth web app look even nicer. Try it out, you won't be disappointed! First, a helper to run a shell command:


# shell/execute.py
from subprocess import Popen, STDOUT, PIPE

banned_commands = ['rm'] # Reject obviously destructive commands

def run_command(command):
    cmd = command.split(' ')
    if cmd[0] in banned_commands:
        return 'command not accepted.\n'
    # Run the command and capture stdout and stderr together
    proc = Popen(cmd, stdout=PIPE, stderr=STDOUT, cwd='/home/team/clemn')
    proc.wait()
    # Decode the output as UTF-8, replacing any undecodable bytes
    return proc.stdout.read().decode('utf-8', errors='replace')
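As a quick sanity check, you can run a harmless command through the helper (assuming the working directory above exists on your machine):

# A quick check of the helper; assumes the cwd above exists
print(run_command('ls -la'))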
To enhance an image, copy it into GFPGAN's input folder, run the inference script, and copy the restored result back:

# enhance/gfpgan.py
from shell.execute import run_command
import shutil
import os

base_dir = '/home/team/theapp/temp/gfpgan/'      # GFPGAN input directory
op_dir = '/home/team/theapp/temp/gfpgan-output/' # GFPGAN output directory

def gfpgan_enhance(image_path):
    # Copy the image into the input directory; GFPGAN processes the whole folder
    filename = image_path.split('/')[-1]
    path = os.path.join(base_dir, filename)
    shutil.copy(image_path, path)
    print(run_command('venv/bin/python GFPGAN/inference_gfpgan.py -i {} -o {} -v 1.3 -s 2'.format(base_dir, op_dir)))
    # Copy the restored image back over the original, then clean up.
    # Note: depending on your GFPGAN version, results may land in a restored_imgs/ subfolder of the output directory.
    dest_path = os.path.join(op_dir, filename)
    shutil.copy(dest_path, image_path)
    os.remove(path)
    os.remove(dest_path)
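With both directories in place, restoring an uploaded photo in place looks like this (the path is hypothetical):

# Hypothetical usage: restore an uploaded photo in place
gfpgan_enhance('/home/team/theapp/media/photos/capture.png')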
Download and installation instructions for GFPGAN can be found at github.com/TencentARC/GFPGAN. Enjoy!


How to isolate a license plate or document from an image using Python

I use this code to create clean ID scans that contain just the ID from an image. The code looks for the largest roughly rectangular contour in the image using computer vision. This is useful for OCR, forensics, verification, or any situation where documents are processed. The code can be modified to isolate anything with a clear contour, like a street sign, cell phone, or building.


# isolate the id from the image scan
import cv2

def write_isolated(image_path, output_path):
    image = cv2.imread(image_path)
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks the binarization threshold automatically
    thresh_img = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    # findContours returns 2 or 3 values depending on the OpenCV version
    cnts = cv2.findContours(thresh_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    # Examine only the five largest contours
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]
    for c in cnts:
        perimeter = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.018 * perimeter, True)
        if len(approx) >= 4: # roughly rectangular: at least four corners
            x, y, w, h = cv2.boundingRect(c)
            new_img = image[y:y+h, x:x+w]
            cv2.imwrite(output_path, new_img)
            return output_path
    return None
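For example (file names are hypothetical), the function returns the output path on success and None when no rectangle is found:

# Hypothetical usage
result = write_isolated('scan.jpg', 'id_only.jpg')
if result is None:
    print('No document-shaped contour was found.')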


How to Upload a WebM Video From a Webcam to a Django Site

Uploading WebM video from the browser is useful for live chat, live video, security cameras, and more. I hope you find this code useful and deploy it yourself, expanding on my ideas to create your own products. I'll also explain how to add basic content moderation, which runs quickly and without much cost.

# The models
# app/models.py
import os
import uuid

from django.contrib.auth.models import User
from django.db import models
from django.utils import timezone

def get_file_path(instance, filename):
    # Store uploads under video/ with a random, collision-free name
    ext = filename.split('.')[-1]
    filename = "%s.%s" % (uuid.uuid4(), ext)
    return os.path.join('video/', filename)

class Camera(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name='camera')
    frame = models.FileField(upload_to=get_file_path, null=True, blank=True)
    last_frame = models.DateTimeField(default=timezone.now)
# The views
# app/views.py
import traceback

from django.contrib.auth.decorators import login_required
from django.http import HttpResponse
from django.shortcuts import render
from django.views.decorators.csrf import csrf_exempt

from app.forms import CameraForm
from app.models import Camera

@login_required
@csrf_exempt
def video(request):
    # Each user gets one Camera row; create it on first visit
    camera = Camera.objects.filter(user=request.user).first()
    if camera is None:
        camera = Camera.objects.create(user=request.user)
    if request.method == 'POST':
        try:
            form = CameraForm(request.POST, request.FILES, instance=camera)
            camera = form.save()
            camera.review() # Review the video with SightEngine (see below)
        except Exception:
            print(traceback.format_exc())
        return HttpResponse(status=200)
    return render(request, 'app/video.html', {'title': 'Video', 'form': CameraForm()})
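The page posts back to its own URL, so a pattern along these lines wires the view up (the route path is an assumption):

# app/urls.py (a sketch; the route path is an assumption)
from django.urls import path
from app import views

urlpatterns = [
    path('video/', views.video, name='video'),
]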
# The forms
# app/forms.py
from django import forms
from app.models import Camera

class CameraForm(forms.ModelForm):
    class Meta:
        model = Camera
        fields = ('frame',)
<!-- The template -->
<!-- templates/video.html -->
{% extends 'base.html' %}
{% block content %}
<div id="container">
<video autoplay="true" muted="true" id="video-element" width="100%"></video>
<form method="POST" enctype="multipart/form-data" id="live-form" style="position: absolute; display: none; visibility: hidden;">
{{ form }}
</form>
</div>
{% endblock %}
// The javascript
// templates/video.js
var form = document.getElementById('live-form');
var scale = 0.2;
var width = 1920 * scale;
var height = 1080 * scale;
var video = document.getElementById('video-element');
var mediaRecorder;
var mediaChunks = [];
const VIDEO_INTERVAL = 5000; // The length of each segment to send, ideally at least 5000 ms (5 seconds)
function capture() {
    mediaRecorder.stop(); // Stopping makes the recorder emit its buffered data via "dataavailable"
}
const clone = (items) => items.map(item => Array.isArray(item) ? clone(item) : item);
function startup() {
    navigator.mediaDevices.getUserMedia({
            video: {
                width: {
                    ideal: width
                },
                height: {
                    ideal: height
                }
            },
            audio: true
        })
        .then(function(stream) {
            video.srcObject = stream;
            video.play();
            mediaRecorder = new MediaRecorder(stream);
            mediaRecorder.addEventListener("dataavailable", event => {
                mediaChunks.push(event.data);
                var mediaData = clone(mediaChunks);
                var file = new Blob(mediaData, {
                    'type': 'video/webm'
                });
                mediaChunks = [];
                mediaRecorder.start();
                var formdata = new FormData(form);
                formdata.append('frame', new File([file], 'frame.webm'));
                $.ajax({
                    url: window.location.href,
                    type: "POST",
                    data: formdata,
                    processData: false,
                    contentType: false,
                }).done(function(respond) {
                    console.log(respond);
                    console.log("Sent frame");
                });
            });
            setTimeout(function() {
                setInterval(capture, VIDEO_INTERVAL);
            }, 5000);
            mediaRecorder.start();
        }).catch(function(err) {
            console.log("An error occurred: " + err);
        });
}
startup();
This is all it takes to upload a WebM video from your webcam, with Django handling the file uploads and storage. Please be cautious with this, however, and do use a moderation API to make sure uploaded content is safe. I use an API from SightEngine.com with a workflow that rejects video I don't want on my site. This is what it looks like:
# The API call
# app/apis.py
import requests
import json

params = {
  'workflow': 'wfl_00000000000000000US',
  'api_user': '000000000',
  'api_secret': '000000000000000000'
}

def is_safe(video_path):
    # Submit the video to a SightEngine moderation workflow
    with open(video_path, 'rb') as f:
        r = requests.post('https://api.sightengine.com/1.0/video/check-workflow-sync.json', files={'media': f}, data=params)
    output = json.loads(r.text)
    if output['status'] == 'failure' or output['summary']['action'] == 'reject':
        return False
    return True
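With real credentials in the params above, a quick check looks like this (the file path is hypothetical):

# Hypothetical usage, once real API credentials are set in params
if not is_safe('/tmp/frame.webm'):
    print('Video rejected by the moderation workflow.')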
The next part is a review method on the model in models.py.
# And the models.py review call
# app/models.py
import os
from .apis import is_safe
...
    # Inside the Camera model: delete the frame if moderation rejects it
    def review(self):
        if self.frame and not is_safe(self.frame.path):
            os.remove(self.frame.path)
            self.frame = None
            self.save()
Creating a workflow on SightEngine allows you to filter out offensive content, celebrities, children, and even alcohol or drugs, which keeps a site safer when users upload video. I also recommend using facial recognition to verify which users are uploading what content; this is important for keeping records of access for verification.

How much does it cost? Running a server that can cache video can be expensive if you have a lot of video to cache, but experimenting is quite inexpensive, less than $10 a month for the server. The SightEngine API is free for 500 API calls per day and 2,000 per month, which covers only about 42 minutes of video per day with 5-second segments. It is still worthwhile to keep your site secure: at $29 per month you get 10,000 API calls, or roughly 833 minutes (about 14 hours) of 5-second segments. I hope this code is useful to you. I appreciate your feedback if you are willing to comment or like, and you can log in with your face!


How to identify and recognize faces using Python with no APIs

I use the code below to implement a login-with-face feature on Uglek. It works by assigning a user a face ID when they upload a face to their profile or go to log in, and then retrieving their account by image using that face ID. Here is the code:

# face/face.py
from django.contrib.auth.models import User
import uuid
from .models import Face
import face_recognition

NUM_FACES = 9 # Compare against at most this many stored faces per user

def get_face_id(image_path):
    image = face_recognition.load_image_file(image_path)
    face_locations = face_recognition.face_locations(image)
    if len(face_locations) != 1: # Require exactly one face in the image
        return False
    unknown_encoding = face_recognition.face_encodings(image)[0]

    for user in User.objects.filter(profile__enable_facial_recognition=True):
        known_image = face_recognition.load_image_file(user.profile.face.path)
        user_encodings = [face_recognition.face_encodings(known_image)[0]]
        user_faces = Face.objects.filter(user=user).order_by('-timestamp')
        for face in user_faces:
            # Reject exact re-uploads of a stored image
            if open(face.image.path, "rb").read() == open(image_path, "rb").read():
                return False
        if user_faces.count() > NUM_FACES:
            user_faces = user_faces[:NUM_FACES]
        for face in user_faces:
            face_image = face_recognition.load_image_file(face.image.path)
            user_encodings.append(face_recognition.face_encodings(face_image)[0])
        # One boolean per known encoding that matches the unknown face
        results = face_recognition.compare_faces(user_encodings, unknown_encoding)
        if any(results):
            return user.profile.uuid
    # No match: mint a fresh face ID
    return str(uuid.uuid4())
In testing, a call looks like get_face_id(User.objects.get(id=1).profile.face.path), which gets my face ID from the face uploaded to my profile. To get the face ID of a user who is logging in, I save a face form with a face object and then call get_face_id(face.image.path) to query the user instance and redirect to their authentication URL. This works well, and I hope it is useful to you. For more information, see the GitHub repository: github.com/ageitgey/face_recognition
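As a rough sketch of the lookup side (assuming the profile stores its face ID in the uuid field, as the code above implies):

# Sketch: look up the account behind a face image; field names follow the code above
face_id = get_face_id(face.image.path)
if face_id:
    user = User.objects.filter(profile__uuid=face_id).first() # None if the ID is freshly minted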


How to Identify Unique Faces with the Microsoft Azure Face API

Using the Microsoft Azure Face API, you can assign unique faces a UUID and identify them for use in login, verification, or any other purpose. The following code accepts an image of a single face and returns a unique UUID representing that face. This has huge application potential in internet security: it could make sites and businesses much more secure by uniquely attributing faces to profiles within apps or security solutions. Using the Face API with Microsoft Azure is free for basic use, and isn't expensive otherwise. To install the Python modules for this code, run

$ pip install --upgrade azure-cognitiveservices-vision-face
$ pip install --upgrade Pillow

The code is as follows.

# face/face.py
import sys
import time
import uuid
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import TrainingStatusType
from msrest.authentication import CognitiveServicesCredentials

# This key will serve all examples in this document.
KEY = "000000000000000000000000000000"
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = "https://endpoint.api.cognitive.microsoft.com/"

PERSON_GROUP_ID = "group" # name it anything, but keep it stable across calls

def get_face_id(single_face_image_url):
    # Create an authenticated FaceClient.
    face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
    # Detect a face in an image that contains a single face.
    # We use detection model 3 to get better performance.
    faces = face_client.face.detect_with_url(single_face_image_url, detection_model='detection_03')
    # Remove this line after the initial call with the first face (or you will get an error on the next call)
    face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)

    if len(faces) != 1: # Return if there is no face, or more than one
        return False
    face_ids = [face.face_id for face in faces]

    results = None
    try:
        results = face_client.face.identify(face_ids, PERSON_GROUP_ID) # Identify the face
    except Exception:
        results = None
    if not results: # Add the face if it was not identified
        p = face_client.person_group_person.create(PERSON_GROUP_ID, str(uuid.uuid4())) # Label the new person with a UUID
        face_client.person_group_person.add_face_from_url(PERSON_GROUP_ID, p.person_id, single_face_image_url)
        face_client.person_group.train(PERSON_GROUP_ID) # Train the person group
        while True:
            training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
            print("Training status: {}.".format(training_status.status))
            if training_status.status == TrainingStatusType.succeeded:
                break
            elif training_status.status == TrainingStatusType.failed:
                sys.exit('Training the person group has failed.')
            time.sleep(5)
        results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
    if results and len(results) > 0 and results[0].candidates: # Load their UUID
        res = results[0].candidates[0].person_id
        print(res)
        return res # Return their UUID
    return False # Or return False to indicate that no face was recognized.

f = 'https://uglek.com/media/face/1b195bf5-8150-4f84-931d-ef0f2a464d06.png'
print(get_face_id(f)) # Identify a face from this image
Using this code, you can call get_face_id(face_url) to get an ID for any face. The face ID is unique to each user, so you can cache it on a profile and use it to retrieve that profile. This is how the "Login with your face" option works on Uglek. I hope you enjoy this code and that it is useful to you. Feel free to use it as you will, but be sure to use your own API keys from the Azure portal. Thank you!


How to Generate a String from a Number in Python

I use the following code to generate an English string from any number up to a billion. It uses simple arrays and if statements to build a compound number as a string.


import math
n = ['one','two','three','four','five','six','seven','eight','nine','ten']
tn = ['eleven','twelve','thir','four','fif','six','seven','eigh','nine'] # 13-19 take a 'teen' suffix
nn = ['ten','twenty','thirty','forty','fifty','sixty','seventy','eighty','ninety']

def number_to_string(num):
    # Recursive calls may pass string slices, including the empty string
    if not isinstance(num, int):
        if num == '':
            return ''
        num = int(num)
    if num == 0:
        return ''
    if num < 11:
        return n[num-1]
    if num < 20:
        if num < 13:
            return tn[num-11]
        return tn[num-11] + 'teen'
    if num < 100:
        extra = '-' + n[num%10-1]
        if num % 10 == 0:
            extra = ''
        return nn[math.floor(num/10)-1] + extra
    if num < 1000:
        snum = str(num)
        rest = number_to_string(int(snum[1:]))
        return n[math.floor(num/100)-1] + '-hundred' + ('-' if rest != '' else '') + rest
    if num < 1000000:
        # Split off the last three digits and recurse on both halves
        snum = str(num)
        rest = number_to_string(snum[len(snum)-3:])
        return number_to_string(snum[:len(snum)-3]) + '-thousand' + ('-' if rest != '' else '') + rest
    if num < 1000000000:
        # Split off the last six digits and recurse on both halves
        snum = str(num)
        rest = number_to_string(snum[len(snum)-6:])
        return number_to_string(snum[:len(snum)-6]) + '-million' + ('-' if rest != '' else '') + rest
    return 'number too large to compute!'

#for x in range(1,100000):
#    print(number_to_string(x))
print(number_to_string(999999999))
This returns a compound string number, "nine-hundred-ninety-nine-million-nine-hundred-ninety-nine-thousand-nine-hundred-ninety-nine".


A JavaScript Drawing - Coffee Mug

I created this simple drawing with code today as a product photo for the new buttons. It's a drawing of a coffee mug, made by using ovals and rectangles. The code that draws it is below.

function init() {
    var stage = new createjs.Stage("coffee"); // draws onto <canvas id="coffee">
    var yoffset = 40;
    var background = new createjs.Shape();
    background.graphics.beginFill("DeepSkyBlue").drawRect(0, 0, 500, 500);
    stage.addChild(background);
    var handle = new createjs.Shape(); // outer ellipse of the handle
    handle.graphics.beginFill("White").drawEllipse(10 + 300 + yoffset, 250 - 150, 120, 300);
    stage.addChild(handle);
    var handleHole = new createjs.Shape(); // punch the handle hole in the background color
    handleHole.graphics.beginFill("DeepSkyBlue").drawEllipse(370, 90 + yoffset, 70, 240);
    stage.addChild(handleHole);
    var mug = new createjs.Shape(); // the body of the mug
    mug.graphics.beginFill("White").drawRect(100, 60 + yoffset, 300, 300);
    stage.addChild(mug);
    var rim = new createjs.Shape(); // the top rim
    rim.graphics.beginFill("White").drawEllipse(250 - 150, 10 + yoffset, 300, 100);
    stage.addChild(rim);
    var coffee = new createjs.Shape(); // the coffee inside
    coffee.graphics.beginFill("Brown").drawEllipse(250 - 130, 30 + yoffset, 260, 60);
    stage.addChild(coffee);
    var bottom = new createjs.Shape(); // the rounded bottom
    bottom.graphics.beginFill("White").drawEllipse(250 - 150, 10 + 300 + yoffset, 300, 100);
    stage.addChild(bottom);
    stage.update();
}


How to Make a Dynamic Easy-Reading Theme Based on Sunrise and Sunset

This code lets me automatically render pages in either light or dark mode (with light or dark styles) depending on whether the sun is up, using location and timezone info queried from an API. This is a great way to make a site easier on the eyes at night: a webpage with a lot of white blank space can be hard to use in the dark, so a context processor switches the site to a dark theme after sunset.

# app/context_processors.py
from datetime import datetime

import pytz
from astral import LocationInfo
from astral.sun import sun

def context_processor(request):
    context_data = {}
    if not request.user.is_authenticated or not hasattr(request.user, 'profile'):
        return context_data # No profile to read a location from
    tz = request.user.profile.timezone # Get the user's timezone
    # Use astral to build a location from city, country, timezone, latitude and longitude
    city = LocationInfo(request.user.profile.city, request.user.profile.country, tz, request.user.profile.lat, request.user.profile.lon)
    city_sun = sun(city.observer, date=datetime.now(pytz.timezone(tz))) # Get the city's sun times
    now_time = datetime.now(pytz.timezone(tz)).time() # Get the time now
    if now_time <= city_sun['sunrise'].astimezone(pytz.timezone(tz)).time() or now_time >= city_sun['sunset'].astimezone(pytz.timezone(tz)).time(): # If the sun is down
        context_data['darkmode'] = True # Make the page dark
    else:
        context_data['darkmode'] = False # Or otherwise make it light
    return context_data
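For the darkmode flag to reach every template, the processor has to be registered under TEMPLATES in settings.py; a minimal sketch of the relevant entry:

# project/settings.py (sketch: only the relevant options shown)
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.request',
                'app.context_processors.context_processor',
            ],
        },
    },
]

The middleware below keeps the profile's IP, location, and timezone fields fresh whenever the client address changes.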

# users/middleware.py
import requests

def simple_middleware(get_response):
    # One-time configuration and initialization.
    def middleware(request):
        if request.user.is_authenticated and hasattr(request.user, 'profile'):
            # Refresh the stored location when the client IP changes.
            # get_client_ip is a small helper that reads the address from the request headers.
            last_ip = request.user.profile.ip
            request.user.profile.ip = get_client_ip(request)
            if request.user.profile.ip != last_ip:
                response = requests.get('https://ipinfo.io/' + request.user.profile.ip + '/json').json() # Get the IP info
                request.user.profile.city = response['city'] # Save it on the profile
                request.user.profile.country = response['country']
                request.user.profile.timezone = response['timezone']
                request.user.profile.lat = response['loc'].split(',')[0]
                request.user.profile.lon = response['loc'].split(',')[1]
            request.user.profile.save()
        response = get_response(request)
        return response
    return middleware
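The middleware itself also needs to be registered in settings.py; a sketch, placed after the auth middleware since it reads request.user:

# project/settings.py (sketch)
MIDDLEWARE = [
    # ...
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'users.middleware.simple_middleware',
]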


A Handy Audio Fix For Iframes Using jQuery

This is how I pause audio across a document with iframes loaded in, so audio doesn't play more than once at a time. The same snippet is included in each iframe and in the main document.

$(function() {
    $("audio").on("play", function() { // When an audio is played in this document
        $("audio", window.parent.document).not(this).each(function(index, audio) { // Get each audio in the parent that isn't this one
            audio.pause(); // Pause it
        });
        var playing = this; // Save the audio that's playing
        $("iframe", window.parent.document).each(function(index, iframe) { // Get all iframes in the parent document
            $(iframe).contents().find("audio").not(playing).each(function(index, audio) { // Filter audios that shouldn't be playing (not the one we clicked)
                audio.pause(); // Pause the audio
            });
        });
    });
});
This simple code pauses the other audio elements on my site as a new one is played. It prevents duplicate audios from playing, and because it runs across all audios and iframes it works in any document. It should be integrated in the parent document and in each iframe of the scrolling page.



Verbose Error Handling With Django Middleware

This is a simple way to verbosely handle errors using Django middleware. With it, you can render error tracebacks to custom HTML pages instead of using the Django debug-mode error pages. Here is how the code works. First, some middleware stores the current exception so the error-handler view can read it.

# app/middleware.py
from threading import local 
import traceback 
from django.utils.deprecation import MiddlewareMixin

_error = local() # Store the error in a local

class ExceptionVerboseMiddleware(MiddlewareMixin):
    def process_exception(self, request, exception): # Process the exception
        _error.value = traceback.format_exc() # Store the stack trace from traceback

def get_current_exception(): # Return the error
    try:
        return _error.value
    except AttributeError:
        return None
In the views, call the helper from the middleware module to fetch the exception.
# app/views.py
from django.shortcuts import render
from app.middleware import get_current_exception

def handler500(request):
    data = {'title': 'Error 500', 'error': get_current_exception()} # Put the error in the context, so we can render it to the template.
    return render(request, 'app/500.html', data)
Include this middleware in your settings.py file.
# project/settings.py
MIDDLEWARE = [
    '...',
    'app.middleware.ExceptionVerboseMiddleware',
    '...'
]
And finally, add this line to your project's urls.py.
# project/urls.py
handler500 = 'app.views.handler500'
Now you simply need to add the {{ error }} tag to your 500 template to render the traceback. This is all it takes to set up a verbose error-handling page in Django.

