
Android ARCore: simple face augmentation, face detection, mask overlays, and accurate monocular measurement of the distance from the screen to the user

Please credit the original post when reposting from this blog.

Preface

A slightly drowsy afternoon, a cup of coffee, and time to fill in a hole I dug earlier: today let's talk about ARCore. ARCore is Google's augmented reality (AR) platform for the camera, and it ships with an Augmented Faces module that we can use for face detection and face augmentation.
ARCore official site
[GIF and screenshot: the ARCore face-tracking effect]
As the images show, ARCore detects the face and builds a 3D model of it, taking a point behind the nose as the center of the face, while the camera is the world origin of the model. It places 468 vertices on the face for a precise fit.
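
For orientation, once ARCore is tracking a face, the center pose and the 468-vertex mesh can be read straight off the AugmentedFace trackable. A minimal sketch (the method name is mine):

// Logs the center pose and vertex count of a tracked face.
// getMeshVertices() returns packed (x, y, z) floats relative to the center pose.
void logFaceMesh(AugmentedFace face) {
    Pose centerPose = face.getCenterPose();          // roughly the point behind the nose
    FloatBuffer vertices = face.getMeshVertices();   // 468 * 3 floats
    Log.d("FaceMesh", "center=" + centerPose + ", vertexCount=" + (vertices.limit() / 3));
}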

Note

Although the service is easy to use and powerful, it does come with a few requirements (especially relevant in mainland China):
1. minSdkVersion 24, i.e. Android 7.0 or higher.
2. Hardware that supports AR.
3. The ARCore service (Google Play Services for AR) must be installed; it can be downloaded or updated if missing, though depending on the app store a VPN may be needed.
I tested on a Xiaomi Mi 8; a runtime availability check is sketched below.
With that covered, let's start using it.
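
Before creating a session you can also check at runtime whether the device is compatible and whether the ARCore app is installed, using ArCoreApk from com.google.ar.core; a rough sketch:

// Checks ARCore availability and prompts the user to install/update "Google Play Services for AR".
void checkArCore(Activity activity) {
    ArCoreApk.Availability availability = ArCoreApk.getInstance().checkAvailability(activity);
    if (availability.isTransient()) {
        // The check may still be running; query again shortly.
        new Handler(Looper.getMainLooper()).postDelayed(() -> checkArCore(activity), 200);
        return;
    }
    if (availability.isSupported()) {
        try {
            ArCoreApk.getInstance().requestInstall(activity, /* userRequestedInstall= */ true);
        } catch (UnavailableDeviceNotCompatibleException | UnavailableUserDeclinedInstallationException e) {
            Log.e("ARCheck", "ARCore unavailable", e);
        }
    } else {
        Log.w("ARCheck", "This device does not support AR");
    }
}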

Usage

In the project-level build.gradle, make sure google() is included in the repositories block:

repositories {
    google()
}

Then add the ARCore dependencies in the app module's build.gradle; I'm using version 1.15.0 here:

implementation 'com.google.ar:core:1.15.0'
// Provides ArFragment, and other UX resources.
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'

// Alternatively, use ArSceneView without the UX dependency.
implementation 'com.google.ar.sceneform:core:1.15.0'

Extending ArFragment

To implement face augmentation, we need to extend ArFragment so we can switch to the front camera and adjust the session configuration. The code is as follows:

package com.google.ar.sceneform.samples.augmentedfaces;

import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.FrameLayout;
import android.widget.Toast;

import androidx.annotation.Nullable;

import com.google.ar.core.Config;
import com.google.ar.core.Config.AugmentedFaceMode;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.UnavailableApkTooOldException;
import com.google.ar.core.exceptions.UnavailableArcoreNotInstalledException;
import com.google.ar.core.exceptions.UnavailableDeviceNotCompatibleException;
import com.google.ar.core.exceptions.UnavailableException;
import com.google.ar.core.exceptions.UnavailableSdkTooOldException;
import com.google.ar.sceneform.ux.ArFragment;

import java.util.EnumSet;
import java.util.Set;

/** ArFragment configured for Augmented Faces: front camera, 3D face mesh, no plane discovery. */
public class FaceArFragment extends ArFragment {

    @Override
    protected Config getSessionConfiguration(Session session) {
        Config config = new Config(session);
        config.setAugmentedFaceMode(AugmentedFaceMode.MESH3D);
        return config;
    }

    @Override
    protected Set<Session.Feature> getSessionFeatures() {
        return EnumSet.of(Session.Feature.FRONT_CAMERA);
    }

    @Override
    protected void handleSessionException(UnavailableException sessionException) {
        String message;
        if (sessionException instanceof UnavailableArcoreNotInstalledException) {
            message = "Please install ARCore";
        } else if (sessionException instanceof UnavailableApkTooOldException) {
            message = "Please update ARCore";
        } else if (sessionException instanceof UnavailableSdkTooOldException) {
            message = "Please update the app";
        } else if (sessionException instanceof UnavailableDeviceNotCompatibleException) {
            message = "This device does not support AR";
        } else {
            message = "Failed to create the AR session; check device compatibility, the ARCore version and the system version: " + sessionException;
        }
        Toast.makeText(getContext(), message, Toast.LENGTH_LONG).show();
    }

    /**
     * Override to turn off planeDiscoveryController. Plane trackables are not supported with the
     * front camera.
     */
    @Override
    public View onCreateView(
            LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        FrameLayout frameLayout =
                (FrameLayout) super.onCreateView(inflater, container, savedInstanceState);

        getPlaneDiscoveryController().hide();
        getPlaneDiscoveryController().setInstructionView(null);

        return frameLayout;
    }
}

Then create your layout:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".AugmentedFacesActivity">

    <fragment
        android:id="@+id/face_fragment"
        android:name="com.google.ar.sceneform.samples.augmentedfaces.FaceArFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:id="@+id/mTv"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/app_name" />

</FrameLayout>
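
The later snippets assume the Activity already holds an arFragment and the mTv TextView. Wiring them up in onCreate looks roughly like this (the layout resource name here is just an assumption):

// In AugmentedFacesActivity; R.layout.activity_augmented_faces is an assumed file name.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_augmented_faces);

    // The FaceArFragment declared in the layout above.
    arFragment = (FaceArFragment) getSupportFragmentManager().findFragmentById(R.id.face_fragment);
    // The TextView that will display the computed distance.
    mTv = findViewById(R.id.mTv);
}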

Setting up the face mesh ModelRenderable and the faceMeshTexture

// Fields on the Activity (the builder calls below live in onCreate()).
// faceNodeMap, decimalFormat and mTv are used by the update listener further below;
// the "0.00" pattern is just an example format.
private ModelRenderable faceRegionsRenderable;
private Texture faceMeshTexture;
private final HashMap<AugmentedFace, AugmentedFaceNode> faceNodeMap = new HashMap<>();
private final DecimalFormat decimalFormat = new DecimalFormat("0.00");
private TextView mTv;

// Load the 3D model that will be attached to the face regions.
ModelRenderable.builder()
    .setSource(this, R.raw.fox_face)
    .build()
    .thenAccept(
        modelRenderable -> {
          faceRegionsRenderable = modelRenderable;
          modelRenderable.setShadowCaster(false);
          modelRenderable.setShadowReceiver(false);
        });

// Load the face mesh texture.
Texture.builder()
    .setSource(this, R.drawable.fox_face_mesh_texture)
    .build()
    .thenAccept(texture -> faceMeshTexture = texture);

ArSceneView sceneView = arFragment.getArSceneView();
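
Both builders load asynchronously, which is why the update listener below simply returns until faceRegionsRenderable and faceMeshTexture are non-null. If you want load failures to be visible rather than silent, you can chain CompletableFuture's exceptionally onto either builder, roughly like this:

// Optional: log a failure to load the face model instead of failing silently.
ModelRenderable.builder()
    .setSource(this, R.raw.fox_face)
    .build()
    .thenAccept(renderable -> faceRegionsRenderable = renderable)
    .exceptionally(throwable -> {
        Log.e("FaceAr", "Unable to load the fox_face model", throwable);
        return null;
    });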

Next, wire everything onto the ArFragment's scene; this is also where the precise distance is computed:

sceneView.setCameraStreamRenderPriority(Renderable.RENDER_PRIORITY_FIRST);

Scene scene = sceneView.getScene();

scene.addOnUpdateListener(
    (FrameTime frameTime) -> {
      if (faceRegionsRenderable == null || faceMeshTexture == null) {
        return;
      }

      Collection<AugmentedFace> faceList =
          sceneView.getSession().getAllTrackables(AugmentedFace.class);

      // Make new AugmentedFaceNodes for any new faces.
      for (AugmentedFace face : faceList) {
        if (!faceNodeMap.containsKey(face)) {
          AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
          faceNode.setParent(scene);
          faceNode.setFaceRegionsRenderable(faceRegionsRenderable);
          faceNode.setFaceMeshTexture(faceMeshTexture);
          faceNodeMap.put(face, faceNode);
        }
      }

      // Remove any AugmentedFaceNodes associated with an AugmentedFace that stopped tracking,
      // and compute the camera-to-face distance for the faces still being tracked.
      Iterator<Map.Entry<AugmentedFace, AugmentedFaceNode>> iter =
          faceNodeMap.entrySet().iterator();
      while (iter.hasNext()) {
        Map.Entry<AugmentedFace, AugmentedFaceNode> entry = iter.next();
        AugmentedFace face = entry.getKey();

        // The forehead region poses are given in world space, and the camera is the world
        // origin, so the length of each translation vector is the camera-to-forehead
        // distance in meters.
        Pose left = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT);
        Pose right = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT);
        float lx = left.tx();
        float ly = left.ty();
        float lz = left.tz();
        float rx = right.tx();
        float ry = right.ty();
        float rz = right.tz();
        double llength = Math.sqrt(lx * lx + ly * ly + lz * lz);
        double rlength = Math.sqrt(rx * rx + ry * ry + rz * rz);

        // Average the two distances and convert from meters to centimeters.
        double distanceCm = (llength + rlength) / 2 * 100;
        Log.d("wzz", "left=" + llength + "m, right=" + rlength + "m");
        mTv.setText("Distance to screen: " + decimalFormat.format(distanceCm) + " cm");

        if (face.getTrackingState() == TrackingState.STOPPED) {
          // drawLine(face.createAnchor(left), face.createAnchor(right)); // drawLine() is not shown in this post
          AugmentedFaceNode faceNode = entry.getValue();
          faceNode.setParent(null);
          iter.remove();
        }
      }
    });
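
Why this measures the distance: as noted earlier, the camera is the world origin of the model and the forehead region poses are expressed in world space, so the length of each pose's translation vector is the camera-to-forehead distance in meters; averaging the left and right values smooths the reading. The same idea as a small helper (the method name is mine):

// Hypothetical helper: distance from the world origin (the camera) to a pose, in centimeters.
private static double distanceToCameraCm(Pose pose) {
    double meters = Math.sqrt(pose.tx() * pose.tx() + pose.ty() * pose.ty() + pose.tz() * pose.tz());
    return meters * 100;
}

If the world origin ever drifts away from the camera, a more robust variant would subtract the camera pose's translation (from frame.getCamera().getPose()) before taking the vector length.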

And that's it; with the pieces above the feature works. Pretty simple, right?
This basically wraps up the series of face-detection posts (skipping the OpenCV 2D-to-3D approach).
Questions and discussion are welcome in the comments.
