GFPGAN Face Restoration for Beautiful Faces with Python

This is how I make face photos from the web shell and the photobooth webapp look even nicer. Try it out, you won't be disappointed! To run the command,


# shell/execute.py
import shlex
from subprocess import Popen, STDOUT, PIPE

banned_commands = ['rm']

def run_command(command):
    cmd = shlex.split(command)  # Split respecting quotes, safer than str.split(' ')
    if not cmd or cmd[0] in banned_commands:
        return 'command not accepted.\n'
    proc = Popen(cmd, stdout=PIPE, stderr=STDOUT, cwd='/home/team/clemn')
    output, _ = proc.communicate()  # Waits for exit without the deadlock that wait() + read() can cause
    return output.decode('utf-8', errors='replace')
To enhance the image,

# enhance/gfpgan.py
from shell.execute import run_command
import shutil
import os

base_dir = '/home/team/theapp/temp/gfpgan/'       # Staging directory GFPGAN reads from
op_dir = '/home/team/theapp/temp/gfpgan-output/'  # Directory GFPGAN writes results to

def gfpgan_enhance(image_path):
    # Stage a copy of the image where GFPGAN can find it
    filename = image_path.split('/')[-1]
    path = os.path.join(base_dir, filename)
    shutil.copy(image_path, path)
    # Run GFPGAN over the staging directory: -v selects the model version, -s the upscale factor
    print(run_command('venv/bin/python GFPGAN/inference_gfpgan.py -i {} -o {} -v 1.3 -s 2'.format(base_dir, op_dir)))
    # Copy the restored image back over the original, then clean up both temp copies
    dest_path = os.path.join(op_dir, filename)
    shutil.copy(dest_path, image_path)
    os.remove(path)
    os.remove(dest_path)
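To put it together, a call looks like this (the path here is illustrative, not a real one from my server):

# Enhance an uploaded photo in place
gfpgan_enhance('/home/team/theapp/media/photos/portrait.jpg')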
Download and installation instructions for GFPGAN can be found at github.com/TencentARC/GFPGAN. Enjoy!


How to isolate a license plate or document from an image using Python

I use this code to create clean ID scans that contain just the ID from an image. The code looks for the largest roughly rectangular contour in the image using computer vision. This is useful for OCR, forensics, verification, or any situation where documents are processed. The code can be modified to isolate anything in an image that has a strong contour, like a street sign, cell phone, or building.


# isolate the id from the image scan
import cv2

def write_isolated(image_path, output_path):
    image = cv2.imread(image_path)
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding separates the document from the background
    thresh_img = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    cnts = cv2.findContours(thresh_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # findContours returns 2 or 3 values depending on OpenCV version
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]  # Keep the five largest contours
    for c in cnts:
        # Approximate the contour; a document should reduce to roughly four corners
        perimeter = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.018 * perimeter, True)
        if len(approx) >= 4:
            x, y, w, h = cv2.boundingRect(c)
            new_img = image[y:y+h, x:x+w]  # Crop to the bounding box of the document
            cv2.imwrite(output_path, new_img)
            return output_path
    return None
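For example, a call on a scan might look like this (the paths are illustrative):

# Crop the largest document-like region out of a scan
write_isolated('scans/id-scan.jpg', 'scans/id-cropped.jpg')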


How to Upload a WebM Video From a Webcam to a Django Site

Uploading WebM video from the browser has many uses, including live chat, live video, and security. Software like this is all over the internet, and I hope you find this code useful, deploy it yourself, and expand on my ideas to create your own products. I'll also explain how to implement basic content moderation, which runs quickly and without much cost.

# The models
# app/models.py
import os
import uuid

from django.contrib.auth.models import User
from django.db import models
from django.utils import timezone

def get_file_path(instance, filename):
    # Store uploads under video/ with a random UUID filename
    ext = filename.split('.')[-1]
    filename = "%s.%s" % (uuid.uuid4(), ext)
    return os.path.join('video/', filename)

class Camera(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name='camera')
    frame = models.FileField(upload_to=get_file_path, null=True, blank=True)
    last_frame = models.DateTimeField(default=timezone.now)
# The views
# app/views.py
import traceback

from django.contrib.auth.decorators import login_required
from django.http import HttpResponse
from django.shortcuts import render
from django.views.decorators.csrf import csrf_exempt

from .forms import CameraForm
from .models import Camera

@login_required
@csrf_exempt
def video(request):
    # Each user gets a single Camera record
    camera, _ = Camera.objects.get_or_create(user=request.user)
    if request.method == 'POST':
        try:
            form = CameraForm(request.POST, request.FILES, instance=camera)
            camera = form.save()
            camera.review()  # Review the clip with SightEngine (see below)
        except Exception:
            print(traceback.format_exc())
        return HttpResponse(status=200)
    return render(request, 'app/video.html', {'title': 'Video', 'form': CameraForm()})
# The forms
# app/forms.py
from django import forms
from app.models import Camera

class CameraForm(forms.ModelForm):
    class Meta:
        model = Camera
        fields = ('frame',)
<!-- The template -->
<!-- templates/video.html -->
{% extends 'base.html' %}
{% block content %}
<div id="container">
  <video autoplay muted id="video-element" width="100%"></video>
  <form method="POST" enctype="multipart/form-data" id="live-form" style="position: absolute; display: none; visibility: hidden;">
    {{ form }}
  </form>
</div>
{% endblock %}
// The javascript
// templates/video.js
var form = document.getElementById('live-form');
var scale = 0.2;
var width = 1920 * scale;
var height = 1080 * scale;
var video = document.getElementById('video-element');
var mediaRecorder;
var mediaChunks = [];
const VIDEO_INTERVAL = 5000; // The length of each segment to send, ideally at least 5000 ms (5 seconds)

function capture() {
    mediaRecorder.stop(); // Stopping makes the recorder emit its buffered data
}

const clone = (items) => items.map(item => Array.isArray(item) ? clone(item) : item);

function startup() {
    navigator.mediaDevices.getUserMedia({
            video: {
                width: {
                    ideal: width
                },
                height: {
                    ideal: height
                }
            },
            audio: true
        })
        .then(function(stream) {
            video.srcObject = stream;
            video.play();
            mediaRecorder = new MediaRecorder(stream);
            mediaRecorder.addEventListener("dataavailable", event => {
                mediaChunks.push(event.data);
                var mediaData = clone(mediaChunks);
                var file = new Blob(mediaData, {
                    'type': 'video/webm'
                });
                mediaChunks = [];
                mediaRecorder.start(); // Restart recording immediately so no footage is lost
                var formdata = new FormData(form);
                formdata.append('frame', new File([file], 'frame.webm'));
                $.ajax({
                    url: window.location.href,
                    type: "POST",
                    data: formdata,
                    processData: false,
                    contentType: false,
                }).done(function(respond) {
                    console.log(respond);
                    console.log("Sent frame");
                });
            });
            setTimeout(function() {
                setInterval(capture, VIDEO_INTERVAL); // Stop (and restart) the recorder every VIDEO_INTERVAL ms
            }, 5000);
            mediaRecorder.start();
        }).catch(function(err) {
            console.log("An error occurred: " + err);
        });
}
startup();
This is all it takes to upload a WebM video from your webcam. Django sites are well suited to this, as they can store large files and index them easily. Please be cautious with this, however, and do use an API to make sure your uploaded content is safe. I use an API from SightEngine.com with a workflow that rejects video I don't want on my site. This is what it looks like:
# The API call
# app/apis.py
import requests
import json

params = {
  'workflow': 'wfl_00000000000000000US',
  'api_user': '000000000',
  'api_secret': '000000000000000000'
}

def is_safe(video_path):
    # Send the clip to SightEngine's synchronous video workflow endpoint
    with open(video_path, 'rb') as media:
        r = requests.post('https://api.sightengine.com/1.0/video/check-workflow-sync.json', files={'media': media}, data=params)
    output = json.loads(r.text)
    if output['status'] == 'failure' or output['summary']['action'] == 'reject':
        return False
    return True
The next part is a review method on the model, which deletes the clip if the API rejects it.
# And the models.py review call
# app/models.py
import os
from .apis import is_safe
...
    def review(self):
        # Delete the uploaded clip if the moderation workflow rejects it
        if self.frame and not is_safe(self.frame.path):
            os.remove(self.frame.path)
            self.frame = None
            self.save()
Creating a workflow on SightEngine lets you filter out offensive content, celebrities, children, and even alcohol or drugs, which keeps a site safer when accepting video uploads. I also recommend using facial recognition to verify which users are uploading what content; this matters when you keep access records for verification.

How much does it cost? Running a server that can cache video can get expensive if you have a lot of video, but experimenting is quite cheap, less than $10 a month for the server. SightEngine is free for 500 API calls per day and 2,000 per month, which works out to only about 42 minutes of video per day with 5-second segments. It is still worthwhile to keep your site secure: at $29 per month, you get 10,000 API calls, or roughly 833 minutes (about 14 hours) of video per month (see the quick math below). I hope this code is useful to you. I appreciate your feedback if you are willing to comment or like, and you can log in with your face!
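For reference, the capacity numbers above come from simple arithmetic, assuming one API call per 5-second segment:

# Back-of-the-envelope moderation capacity for 5-second segments (one API call each)
SEGMENT_SECONDS = 5
print(500 * SEGMENT_SECONDS / 60)      # Free tier, per day: ~41.7 minutes of video
print(10000 * SEGMENT_SECONDS / 3600)  # $29 tier, per month: ~13.9 hours of video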


How to identify and recognize faces using Python with no APIs

I use the code below to implement the login-with-face feature on Uglek. It works by assigning a user a face ID when they upload a face to their profile or go to log in, and then retrieving their account by image using that face ID. Here is the code:

# face/face.py
from django.contrib.auth.models import User
import uuid
from .models import Face
import face_recognition

NUM_FACES = 9  # Compare against at most this many stored faces per user

def get_face_id(image_path):
    image = face_recognition.load_image_file(image_path)
    face_locations = face_recognition.face_locations(image)
    if len(face_locations) != 1:  # Require exactly one face in the photo
        return False

    for user in User.objects.filter(profile__enable_facial_recognition=True):
        known_image = face_recognition.load_image_file(user.profile.face.path)
        user_encodings = [face_recognition.face_encodings(known_image)[0]]
        user_faces = Face.objects.filter(user=user).order_by('-timestamp')
        for face in user_faces:
            # Reject byte-for-byte copies of an already stored image (replay protection)
            with open(face.image.path, 'rb') as stored, open(image_path, 'rb') as candidate:
                if stored.read() == candidate.read():
                    return False
        for face in user_faces[:NUM_FACES]:
            known = face_recognition.load_image_file(face.image.path)
            user_encodings.append(face_recognition.face_encodings(known)[0])
        unknown_encoding = face_recognition.face_encodings(image)[0]
        results = face_recognition.compare_faces(user_encodings, unknown_encoding)
        if any(results):  # A match against any stored encoding identifies this user
            return user.profile.uuid
    return str(uuid.uuid4())  # Unknown face: mint a fresh face ID
In testing, the call looks like get_face_id(User.objects.get(id=1).profile.face.path), which gets my face ID from the face uploaded to my profile. To get the face ID of a user who is logging in, I save a face form with a face object, then call get_face_id(face.image.path) to look up the user instance and redirect to their authentication URL, as sketched below. This works well, and I hope it is useful to you. For more information, see the face_recognition project on GitHub: github.com/ageitgey/face_recognition
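Putting that login flow into code, here is a minimal sketch (FaceForm, the profile uuid field, and the 'users:authenticate' URL name are illustrative assumptions, not Uglek's actual code):

# face/views.py - hypothetical login view built around get_face_id()
from django.contrib.auth.models import User
from django.shortcuts import redirect, render

from .face import get_face_id
from .forms import FaceForm

def face_login(request):
    if request.method == 'POST':
        form = FaceForm(request.POST, request.FILES)
        if form.is_valid():
            face = form.save()
            face_id = get_face_id(face.image.path)  # False, an existing profile uuid, or a fresh uuid
            user = User.objects.filter(profile__uuid=face_id).first() if face_id else None
            if user:
                return redirect('users:authenticate', pk=user.pk)
    else:
        form = FaceForm()
    return render(request, 'face/login.html', {'form': form})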


How to Identify Unique Faces with the Microsoft Azure Face API

Using the Microsoft Azure Face API, you can assign unique faces a UUID and identify them for use in login, verification, or any other purpose. The following code accepts an image of a single face and returns a unique UUID representing that face. This has huge application potential in internet security: it could make sites and businesses more secure by uniquely attributing faces to profiles within apps or security solutions. The Face API on Microsoft Azure is free for basic use, and isn't expensive otherwise. To install the Python modules for this code, run

$ pip install --upgrade azure-cognitiveservices-vision-face
$ pip install --upgrade Pillow

The code is as follows.

# face/face.py
import sys
import time
import uuid

from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import TrainingStatusType
from msrest.authentication import CognitiveServicesCredentials

# This key will serve all examples in this document.
KEY = "000000000000000000000000000000"
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = "https://endpoint.api.cognitive.microsoft.com/"

PERSON_GROUP_ID = "group"  # Assign any group ID (or name it anything)

def get_face_id(single_face_image_url):
    # Create an authenticated FaceClient.
    face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
    # Detect a face in an image that contains a single face.
    # We use detection model 3 to get better performance.
    faces = face_client.face.detect_with_url(single_face_image_url, detection_model='detection_03')
    if len(faces) != 1:  # Return if there are too many (or no) faces
        return False
    face_ids = [face.face_id for face in faces]

    # Create the person group on first use; later calls raise because it already exists.
    try:
        face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)
    except Exception:
        pass

    try:
        results = face_client.face.identify(face_ids, PERSON_GROUP_ID)  # Identify the face
    except Exception:
        results = None
    if not results:  # Add the face if it is not identified
        p = face_client.person_group_person.create(PERSON_GROUP_ID, str(uuid.uuid4()))  # Name the person with a UUID
        face_client.person_group_person.add_face_from_url(PERSON_GROUP_ID, p.person_id, single_face_image_url)
        face_client.person_group.train(PERSON_GROUP_ID)  # Retrain so identify can find the new person
        while True:
            training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
            print("Training status: {}.".format(training_status.status))
            if training_status.status is TrainingStatusType.succeeded:
                break
            elif training_status.status is TrainingStatusType.failed:
                sys.exit('Training the person group has failed.')
            time.sleep(5)
        results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
    if results and results[0].candidates:  # Load the matched person's ID
        res = results[0].candidates[0].person_id
        print(res)
        return res  # Return their UUID
    return False  # Or return False to indicate that no face was recognized.

f = 'https://uglek.com/media/face/1b195bf5-8150-4f84-931d-ef0f2a464d06.png'
print(get_face_id(f))  # Identify a face from this image
Using this code, you can call get_face_id(face_url) to get an ID for any face. The face ID is unique to each user, so you can cache it on a profile and use it to retrieve that profile. This is how the "Login with your face" option works on Uglek. I hope you enjoy this code and find it useful. Feel free to use it as you will, but be sure to use your own API keys from Azure.com. Thank you!


How to Generate a String from a Number in Python

I use the following code to generate a string from a number under one billion. It uses simple lists and if statements to build a compound number as a string.


n = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten']
tn = ['eleven', 'twelve', 'thir', 'four', 'fif', 'six', 'seven', 'eigh', 'nine']
nn = ['ten', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety']

def number_to_string(num):
    # Recursive calls may pass string slices, so normalize to int first
    if not isinstance(num, int):
        if num == '':
            return ''
        num = int(num)
    if num == 0:
        return ''
    if num < 11:
        return n[num-1]
    if num < 20:
        if num < 13:
            return tn[num-11]  # 'eleven' and 'twelve' are irregular
        return tn[num-11] + 'teen'
    if num < 100:
        extra = '' if num % 10 == 0 else '-' + n[num % 10 - 1]
        return nn[num // 10 - 1] + extra
    if num < 1000:
        rest = number_to_string(num % 100)
        return n[num // 100 - 1] + '-hundred' + ('-' + rest if rest else '')
    if num < 1000000:
        rest = number_to_string(num % 1000)
        return number_to_string(num // 1000) + '-thousand' + ('-' + rest if rest else '')
    if num < 1000000000:
        rest = number_to_string(num % 1000000)
        return number_to_string(num // 1000000) + '-million' + ('-' + rest if rest else '')
    return 'number too large to compute!'

#for x in range(1, 100000):
#    print(number_to_string(x))
print(number_to_string(999999999))
This returns a compound string number, "nine-hundred-ninety-nine-million-nine-hundred-ninety-nine-thousand-nine-hundred-ninety-nine".
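A few more sample outputs:

print(number_to_string(13))   # thirteen
print(number_to_string(42))   # forty-two
print(number_to_string(105))  # one-hundred-five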


A JavaScript Drawing - Coffee Mug

I created this simple drawing with code today as a product photo for the new buttons. It's a drawing of a coffee mug, made using ellipses and rectangles. The code that draws it is below.

function init() {
    var stage = new createjs.Stage("coffee");
    var yoffset = 40;
    var background = new createjs.Shape();
    background.graphics.beginFill("DeepSkyBlue").drawRect(0, 0, 500, 500); // Sky-blue backdrop
    stage.addChild(background);
    var handle = new createjs.Shape();
    handle.graphics.beginFill("White").drawEllipse(10 + 300 + yoffset, 250 - 150, 120, 300); // Outer handle
    stage.addChild(handle);
    var handleHole = new createjs.Shape();
    handleHole.graphics.beginFill("DeepSkyBlue").drawEllipse(370, 90 + yoffset, 70, 240); // Hole in the handle
    stage.addChild(handleHole);
    var mug = new createjs.Shape();
    mug.graphics.beginFill("White").drawRect(100, 60 + yoffset, 300, 300); // Body of the mug
    stage.addChild(mug);
    var rim = new createjs.Shape();
    rim.graphics.beginFill("White").drawEllipse(250 - 150, 10 + yoffset, 300, 100); // Rim
    stage.addChild(rim);
    var coffee = new createjs.Shape();
    coffee.graphics.beginFill("Brown").drawEllipse(250 - 130, 30 + yoffset, 260, 60); // Coffee inside
    stage.addChild(coffee);
    var base = new createjs.Shape();
    base.graphics.beginFill("White").drawEllipse(250 - 150, 10 + 300 + yoffset, 300, 100); // Rounded base
    stage.addChild(base);
    stage.update();
}


How to Create a Dynamic, Easy-to-Read Theme Based on Sunrise and Sunset

This code lets me automatically render pages in light or dark mode (with light or dark styles) depending on whether the sun is up. It queries location and timezone information using an API. It's a great way to make a site more pleasant to look at after dark: a web page with a lot of white space can be hard on the eyes at night, so it helps to have a context processor that makes the site easier to read at night.

# app/context_processors.py
import datetime
import pytz
from astral import LocationInfo
from astral.sun import sun

def context_processor(request):
    context_data = {'dark_mode': False}  # Default to light styles
    if request.user.is_authenticated and hasattr(request.user, 'profile'):
        profile = request.user.profile
        tz = pytz.timezone(profile.timezone)
        now = datetime.datetime.now(tz)
        location = LocationInfo(profile.city, profile.region, profile.timezone, profile.latitude, profile.longitude)
        s = sun(location.observer, date=now.date(), tzinfo=tz)
        if now < s['sunrise'] or now > s['sunset']:
            context_data['dark_mode'] = True  # The sun is down, so use dark styles
        # Or otherwise, leave it light
    return context_data

# users/middleware.py
from django.contrib.auth import get_user_model
from django.shortcuts import get_object_or_404

def simple_middleware(get_response):
    # One-time configuration and initialization.
    def middleware(request):
        User = get_user_model()
        if request.user.is_authenticated and hasattr(request.user, 'profile'):
            user = get_object_or_404(User, pk=request.user.pk)
            # Refresh the stored timezone whenever the client IP changes.
            last_ip = request.user.profile.ip
            request.user.profile.ip = get_client_ip(request)
            if request.user.profile.ip != last_ip:
                request.user.profile.timezone = get_timezone(request.user.profile.ip)
                request.user.profile.save()
        return get_response(request)
    return middleware
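The get_timezone() helper the middleware calls isn't shown above; a minimal sketch, assuming the free ipinfo.io endpoint (which returns a "timezone" field for an IP), might look like this:

# users/utils.py - hypothetical helper; ipinfo.io is an assumption, not necessarily what the site uses
import requests

def get_timezone(ip):
    try:
        data = requests.get('https://ipinfo.io/{}/json'.format(ip), timeout=5).json()
        return data.get('timezone', 'UTC')  # Fall back to UTC if the field is missing
    except requests.RequestException:
        return 'UTC'  # Fall back to UTC if the lookup fails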


A Handy Audio Fix for Iframes Using jQuery

This is how I pause audio playing elsewhere in the document when iframes are loaded, so that audio never plays more than once at a time. This fix stops double audio playback across multiple iframes. The code is included in each iframe and in the main document.

$(function() {
    $("audio").on("play", function() { // When any audio plays in this document
        $("audio", window.parent.document).not(this).each(function(index, audio) { // Get every audio that is not this one
            audio.pause(); // Pause it
        });
        var playing = this; // Remember the audio that is now playing
        $("iframe", window.parent.document).each(function(index, iframe) { // Get every iframe in the parent document
            $(iframe).contents().find("audio").not(playing).each(function(index, audio) { // Filter to the audios that should not be playing (not the one we clicked)
                audio.pause(); // Pause the audio
            });
        });
    });
});
This simple code pauses my site's audio elements when a new one starts playing. It can be used to prevent duplicate audio playback, and it works across all audios and iframes, so it can be used in any document. It must be embedded in the parent document and in each iframe on the scrolling page.


Detailed Error Handling with Django Middleware

This is a simple way to handle errors verbosely using Django middleware. With it, you can display your error tracebacks on custom HTML pages instead of using Django's debug-mode error pages. Here is how the code works. First, a middleware that captures the current error for the error-handler view.

# app/middleware.py
from threading import local
import traceback
from django.utils.deprecation import MiddlewareMixin

_error = local()  # Store the error in a thread-local

class ExceptionVerboseMiddleware(MiddlewareMixin):
    def process_exception(self, request, exception):  # Process the exception
        _error.value = traceback.format_exc()  # Store the stack trace from traceback

def get_current_exception():  # Return the error
    try:
        return _error.value
    except AttributeError:
        return None
In the views, add a call to the middleware helper to get the exception.
# app/views.py
def handler500(request):
    data = {'title': 'Error 500', 'error': get_current_exception()}  # Put the error in the context so we can render it in the template.
    return render(request, 'blog/500.html', data)
Include this middleware in your settings.py file.
# project/settings.py
MIDDLEWARE = [
    '...',
    'app.middleware.ExceptionVerboseMiddleware',
    '...'
]
And finally, point the 500 handler at your view in the project's urls.py.
# project/urls.py 
handler500 = 'blog.views.handler500' 
Now all you have to do is add a tag,
{{ error }}
to your 500 template to display the error. That's all it takes to set up a detailed error-handling page in Django.
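To try it out, one option is a throwaway view that always raises, wired to any URL while DEBUG = False (the view name is illustrative):

# app/views.py - hypothetical view for testing the verbose 500 page
def boom(request):
    raise RuntimeError('Testing the verbose 500 page')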