Take a picture with drawing / painting on the face using the Vision API

What am I trying to do?

I am trying to take pictures with drawing / painting on the face, but I can't get both in the same image.

    What have I tried?

    I tried using CameraSource.takePicture, but I just end up with the face without any drawing / painting on it.

     mCameraSource.takePicture(shutterCallback, new CameraSource.PictureCallback() {
         @Override
         public void onPictureTaken(byte[] bytes) {
             try {
                 String mainpath = getExternalStorageDirectory() + separator + "TestXyz" + separator + "images" + separator;
                 File basePath = new File(mainpath);
                 if (!basePath.exists())
                     Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
                 String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
                 File captureFile = new File(path);
                 captureFile.createNewFile();
                 if (!captureFile.exists())
                     Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
                 FileOutputStream stream = new FileOutputStream(captureFile);
                 stream.write(bytes);
                 stream.flush();
                 stream.close();
             } catch (IOException e) {
                 e.printStackTrace();
             }
         }
     });

    I also tried using:

     mPreview.setDrawingCacheEnabled(true);
     Bitmap drawingCache = mPreview.getDrawingCache();
     try {
         String mainpath = getExternalStorageDirectory() + separator + "TestXyz" + separator + "images" + separator;
         File basePath = new File(mainpath);
         if (!basePath.exists())
             Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
         String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
         File captureFile = new File(path);
         captureFile.createNewFile();
         if (!captureFile.exists())
             Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
         FileOutputStream stream = new FileOutputStream(captureFile);
         drawingCache.compress(Bitmap.CompressFormat.PNG, 100, stream);
         stream.flush();
         stream.close();
     } catch (IOException e) {
         e.printStackTrace();
     }

    In this case, I only get what I drew on the face. Here, mPreview is the CameraSourcePreview.

    I just added a capture button and added the code above to this Google sample.

  2 Solutions for "Take a picture with drawing / painting on the face using the Vision API"

    You are very close to achieving what you need 🙂

    You have:

    1. A camera image of the face (first code snippet)
    2. An image of the eyes overlay canvas (second code snippet)

    What you need:

    • An image that has the face with the eyes overlay on top – a merged image.

    How to merge?

    To merge the 2 images, simply use a Canvas, like this:

     public Bitmap mergeBitmaps(Bitmap face, Bitmap overlay) {
         // Create a new image with target size
         int width = face.getWidth();
         int height = face.getHeight();
         Bitmap newBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

         Rect faceRect = new Rect(0, 0, width, height);
         Rect overlayRect = new Rect(0, 0, overlay.getWidth(), overlay.getHeight());

         // Draw face and then overlay (Make sure rects are as needed)
         Canvas canvas = new Canvas(newBitmap);
         canvas.drawBitmap(face, faceRect, faceRect, null);
         canvas.drawBitmap(overlay, overlayRect, faceRect, null);
         return newBitmap;
     }

    Then you can save the new image, just like you are doing now.

    The complete code would be:

     mCameraSource.takePicture(shutterCallback, new CameraSource.PictureCallback() {
         @Override
         public void onPictureTaken(byte[] bytes) {
             // Generate the Face Bitmap
             BitmapFactory.Options options = new BitmapFactory.Options();
             Bitmap face = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, options);

             // Generate the Eyes Overlay Bitmap
             mPreview.setDrawingCacheEnabled(true);
             Bitmap overlay = mPreview.getDrawingCache();

             // Generate the final merged image
             Bitmap result = mergeBitmaps(face, overlay);

             // Save result image to file
             try {
                 String mainpath = getExternalStorageDirectory() + separator + "TestXyz" + separator + "images" + separator;
                 File basePath = new File(mainpath);
                 if (!basePath.exists())
                     Log.d("CAPTURE_BASE_PATH", basePath.mkdirs() ? "Success" : "Failed");
                 String path = mainpath + "photo_" + getPhotoTime() + ".jpg";
                 File captureFile = new File(path);
                 captureFile.createNewFile();
                 if (!captureFile.exists())
                     Log.d("CAPTURE_FILE_PATH", captureFile.createNewFile() ? "Success" : "Failed");
                 FileOutputStream stream = new FileOutputStream(captureFile);
                 result.compress(Bitmap.CompressFormat.PNG, 100, stream);
                 stream.flush();
                 stream.close();
             } catch (IOException e) {
                 e.printStackTrace();
             }
         }
     });

    Note that the above is just sample code. You should probably move the merging and the file writing to a background thread.
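
    As a rough sketch of that idea (not from the original answer), the merge-and-save work can be handed off to a single-thread executor so the UI thread stays responsive. saveBitmap here is a hypothetical helper wrapping the FileOutputStream code shown earlier; the executor comes from java.util.concurrent.

     // Sketch: run the merge + file write off the UI thread.
     // Requires: import java.util.concurrent.ExecutorService;
     //           import java.util.concurrent.Executors;
     private final ExecutorService ioExecutor = Executors.newSingleThreadExecutor();

     private void mergeAndSaveAsync(final Bitmap face, final Bitmap overlay) {
         ioExecutor.execute(new Runnable() {
             @Override
             public void run() {
                 Bitmap result = mergeBitmaps(face, overlay); // merge off the UI thread
                 saveBitmap(result);                          // hypothetical helper wrapping the save-to-file code
             }
         });
     }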

    You can achieve the effect you want by breaking it down into smaller steps; a rough sketch of how the steps fit together follows the list below.

    1. Take the picture
    2. Send the bitmap to Google Mobile Vision to detect the "landmarks" in the face and the probability that each eye is open
    3. Paint the appropriate "eyes" onto your image
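
    As a hedged sketch (not part of the original answer) of how these three steps could be wired together, assuming the code runs inside an Activity, that processFaces is the helper defined further below, and that saveBitmap is a hypothetical stand-in for the file-writing code from the question:

     // Step 1: take the picture and decode the JPEG bytes into a Bitmap.
     mCameraSource.takePicture(shutterCallback, new CameraSource.PictureCallback() {
         @Override
         public void onPictureTaken(byte[] bytes) {
             Bitmap picture = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);

             // Steps 2 + 3: run Mobile Vision face detection and paint the eyes onto a copy.
             Bitmap painted = processFaces(getApplicationContext(), picture);

             // Persist the result (hypothetical helper wrapping the FileOutputStream code).
             saveBitmap(painted);
         }
     });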

    When using Google Mobile Vision's FaceDetector, you get back a SparseArray of Face objects (which may contain more than one face, or which may be empty). So you will need to handle those cases. But you can loop through the SparseArray and find the Face object you want to play with.

     static Bitmap processFaces(Context context, Bitmap picture) {
         // Create a "face detector" object, using the builder pattern
         FaceDetector detector = new FaceDetector.Builder(context)
                 .setTrackingEnabled(false) // disable tracking to improve performance
                 .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                 .build();

         // Create a "Frame" object, again using a builder pattern (and passing in our picture)
         Frame frame = new Frame.Builder().setBitmap(picture).build(); // build frame

         // Get a sparse array of face objects
         SparseArray<Face> faces = detector.detect(frame); // detect the faces

         // This example just deals with a single face for the sake of simplicity,
         // but you can change this to deal with multiple faces.
         if (faces.size() != 1) return picture;

         // Make a mutable copy of the background image that we can modify
         Bitmap bmOverlay = Bitmap.createBitmap(picture.getWidth(), picture.getHeight(), picture.getConfig());
         Canvas canvas = new Canvas(bmOverlay);
         canvas.drawBitmap(picture, 0, 0, null);

         // Get the Face object that we want to manipulate, and process it
         Face face = faces.valueAt(0);
         processFace(face, canvas);

         detector.release();
         return bmOverlay;
     }
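
    The snippet above bails out unless exactly one face was detected. As a variation that is not in the original answer, the same SparseArray can simply be iterated to paint every face found:

     // Sketch: paint every detected face instead of requiring exactly one.
     static Bitmap processAllFaces(Context context, Bitmap picture) {
         FaceDetector detector = new FaceDetector.Builder(context)
                 .setTrackingEnabled(false)
                 .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                 .build();
         SparseArray<Face> faces = detector.detect(new Frame.Builder().setBitmap(picture).build());

         // Mutable copy of the picture to draw on.
         Bitmap bmOverlay = Bitmap.createBitmap(picture.getWidth(), picture.getHeight(), picture.getConfig());
         Canvas canvas = new Canvas(bmOverlay);
         canvas.drawBitmap(picture, 0, 0, null);

         // SparseArray is index-addressable via valueAt(); process each face in turn.
         for (int i = 0; i < faces.size(); i++) {
             processFace(faces.valueAt(i), canvas); // reuse the per-face helper below
         }

         detector.release();
         return bmOverlay;
     }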

    Once you have a Face object, you can find the features that interest you like this:

     private static void processFace(Face face, Canvas canvas) {
         // The Face object can tell you the probability that each eye is open.
         // I'm comparing this probability to an arbitrary threshold of 0.6 here,
         // but you can vary it between 0 and 1 as you please.
         boolean leftEyeClosed = face.getIsLeftEyeOpenProbability() < .6;
         boolean rightEyeClosed = face.getIsRightEyeOpenProbability() < .6;

         // Loop through the face's "landmarks" (eyes, nose, etc) to find the eyes.
         // landmark.getPosition() gives you the (x,y) coordinates of each feature.
         for (Landmark landmark : face.getLandmarks()) {
             if (landmark.getType() == Landmark.LEFT_EYE)
                 overlayEyeBitmap(canvas, leftEyeClosed, landmark.getPosition().x, landmark.getPosition().y);
             if (landmark.getType() == Landmark.RIGHT_EYE)
                 overlayEyeBitmap(canvas, rightEyeClosed, landmark.getPosition().x, landmark.getPosition().y);
         }
     }

    Then you can add your painting!

     private static void overlayEyeBitmap(Canvas canvas, boolean eyeClosed, float cx, float cy) {
         float radius = 40;

         // Draw the eye's background circle with appropriate color
         Paint paintFill = new Paint();
         paintFill.setStyle(Paint.Style.FILL);
         if (eyeClosed)
             paintFill.setColor(Color.YELLOW);
         else
             paintFill.setColor(Color.WHITE);
         canvas.drawCircle(cx, cy, radius, paintFill);

         // Draw a black border around the eye
         Paint paintStroke = new Paint();
         paintStroke.setColor(Color.BLACK);
         paintStroke.setStyle(Paint.Style.STROKE);
         paintStroke.setStrokeWidth(5);
         canvas.drawCircle(cx, cy, radius, paintStroke);

         if (eyeClosed)
             // Draw horizontal line across closed eye
             canvas.drawLine(cx - radius, cy, cx + radius, cy, paintStroke);
         else {
             // Draw big off-center pupil on open eye
             paintFill.setColor(Color.BLACK);
             float cxPupil = cx - 10;
             float cyPupil = cy + 10;
             canvas.drawCircle(cxPupil, cyPupil, 25, paintFill);
         }
     }

    In the snippet above, I just hard-coded the eye radii to show a proof of concept. You probably want to do more flexible scaling, using some percentage of face.getWidth() to determine the appropriate values (a rough sketch of that follows the example image below). But here is what this image processing can do:

    [image: result with big cartoon eyes painted over the face]
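
    As a rough sketch of that flexible scaling (the 15% factor is an arbitrary illustration, not a value from the original answer), the radius could be derived from the detected face width and passed into the drawing helper:

     // Sketch: scale the eye radius with the detected face size instead of hard-coding 40px.
     private static float eyeRadiusFor(Face face) {
         return face.getWidth() * 0.15f; // arbitrary illustrative factor; tune to taste
     }

     // processFace would then pass the computed radius along, e.g.
     //   overlayEyeBitmap(canvas, leftEyeClosed, x, y, eyeRadiusFor(face));
     // and overlayEyeBitmap would take the radius as an extra parameter
     // instead of declaring "float radius = 40" locally.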

    More details about the Mobile Vision API are here, and Udacity's Advanced Android course has a great walkthrough of this stuff (taking a picture, sending it to Mobile Vision, and adding a bitmap on top of it). The course is free, or you can just look at what they did on GitHub.
