
iOS Remote Dual Recording User Guide

Last updated: 2020-07-14 12:34:30

Create the Project

Create a new project with Xcode.

Configuration Steps

1. Download the SDK

Copy Resources.bundle out of the IDRSSDK.framework directory, then copy IDRSSDK.framework and Resources.bundle together into your app folder.

Integrate them by dragging them into the Xcode project.

2. Configure parameters

a. Add the IDRSSDK.framework dynamic library to the project, together with the other required libraries, as shown in the figure.

b. Add the Resources.bundle resource, as shown in the figure.

c. Set Other Linker Flags to -ObjC, as shown in the figure.

d. Allow the HTTP protocol (App Transport Security exception), as shown in the figure.

e. Grant the camera, microphone, and photo library permissions.
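Steps d and e are configured in Info.plist. A minimal sketch follows; the usage-description strings are placeholders, and in production you should scope the ATS exception more narrowly than `NSAllowsArbitraryLoads`:

```xml
<!-- d. Allow plain HTTP (App Transport Security exception) -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
<!-- e. Permission prompts shown to the user -->
<key>NSCameraUsageDescription</key>
<string>The camera is used to record the insurance session.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used to record the insurance session.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>The photo library is used to save captured images.</string>
```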

iOS Flowchart

(flowchart figure)

iOS Class Diagram

Main Classes

Class	Description
RTCSampleChatViewController	UI layout and operation flow; initialization and implementation of the RTC SDK and the IDRS SDK
idrs_NetWorking	Networking and request-signing wrapper
FaceDetectView	Face bounding-box display (face box drawing)
IDRSSDK.h	AI detection

Key methods of RTCSampleChatViewController

Method	Description
initRTCSDK	Initializes the RTC SDK
initIDRSSDK	Initializes the IDRS SDK
startPreview	Local video preview (includes subscribing to the video stream)
publish	Publishes the local video stream
JoinChannel	Joins the meeting (includes subscribing to the audio stream)
onSubscribeChangedNotify	Remote-subscription callback (reports remote video streams and the number of remote windows)
onCaptureVideoSample	Local video stream callback (local frames to be analyzed are delivered here)
onRemoteVideoSample	Remote video stream callback (remote frames to be analyzed are delivered here)
onAudioSampleCallback	Audio stream callback (returns the audio of whichever end is subscribed)

Key methods of idrs_NetWorking

Method	Description
HttpPutWithMethod	PUT request wrapper
HttpGetWithMethod	GET request wrapper
HTTPWithMethod	POP (OpenAPI) request wrapper (includes signing)

Key methods of FaceDetectView

Method	Description
drawRect	Draws the bounding box (including its position)
setDetectResult	Processes face information

Key methods of IDRSSDK.h

Method	Description
detectFace	Detects face feature values
detectIDCard	Detects ID cards
detectHandGesture	Detects dynamic hand gestures
detectHandStaticGesture	Detects static hand gestures
faceRecognitionSimilarity	Compares two faces for similarity
faceTrackFromImage	Detects faces in an image
faceTrackFromVideo	Detects faces in a video stream
faceTrackFromRemoteVideo	Detects faces in the RTC remote video stream
startDialog	Starts activation-word detection
stopDialog	Stops activation-word detection
feedAudioFrame	Feeds in an external audio stream

Meetings

1. Join a meeting

Roles
  • Insurance agent: creates the meeting; calls both the create-meeting API and the join-meeting API.
  • Policyholder and beneficiary: only call the join-meeting API.

Calling the APIs

First the insurance agent calls the create-meeting API to obtain a meeting code:

//Code location: FaceSDKDemoFirstViewController.m
//API implementation: idrs_NetWorking.m
NSDictionary *userinfo = @{@"Action":@"CreateLive",@"AppId":@"ulw07lvw-1",@"Name":@"张三",@"UserId":self.uuid};
//Create the meeting
[idrs_NetWorking HTTPWithMethod:@"POST" body:userinfo success:^(id _Nonnull responseObject) {
    NSLog(@"Success: %@", responseObject);
    NSDictionary *dic = [responseObject objectForKey:@"Data"];
    self.channel = [dic objectForKey:@"Channel"];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.tipLable.text = [NSString stringWithFormat:@"视频会议码为:%@,请发送给客户,通过输入会议码加入远程双录", self.channel];
    });
} failure:^(NSError * _Nonnull error) {
    NSLog(@"Error: %@", error);
}];

This call returns a meeting code.

  • Next, every role calls the join-meeting API and passes the returned data to RTCSampleChatViewController:

//Code location: FaceSDKDemoFirstViewController.m
-(void)JoinMettingWithChannel:(NSString*)channelName AndName:(NSString*)userNames{
    [RTCSampleUserAuthrization getPassportFromAppServer:channelName userName:userNames success:^(AliRtcAuthInfo *info, NSString *liveId) {
        dispatch_async(dispatch_get_main_queue(), ^{
            AppDelegate *appDelegate = (AppDelegate *)[UIApplication sharedApplication].delegate;
            appDelegate.chatVC = [appDelegate RTCChatView];
            appDelegate.chatVC.channelName = channelName;
            appDelegate.chatVC.manName = userNames;
            appDelegate.chatVC.info = info;
            appDelegate.chatVC.userId = info.user_id;
            appDelegate.chatVC.liveId = liveId;
            appDelegate.chatVC.audioCapture = true;
            appDelegate.chatVC.audioPlayer = true;
            [self.navigationController pushViewController:appDelegate.chatVC animated:YES];
            [self.bgScroll bringSubviewToFront:self.bgView];
        });
    } failure:^(NSError *error) {
        //Handle the error
    }];
}

getPassportFromAppServer is implemented as follows:

//Code location: RTCSampleUserAuthrization.m
//API implementation: idrs_NetWorking.m
+ (void)getPassportFromAppServer:(NSString *)channelName userName:(NSString *)name success:(void (^)(AliRtcAuthInfo *info, NSString *liveId))success failure:(void (^)(NSError *error))failure{
    __block AliRtcAuthInfo *info = [[AliRtcAuthInfo alloc] init];
    NSString *userId = [[NSUserDefaults standardUserDefaults] objectForKey:@"userId"];
    NSDictionary *dic = @{@"Action":@"JoinLive",@"Channel":channelName,@"UserId":userId};
    [idrs_NetWorking HTTPWithMethod:@"POST" body:dic success:^(id _Nonnull responseObject) {
        if (![responseObject[@"Code"] isEqualToString:@"OK"]) {
            //Wrap the returned error code in an NSError before calling failure
            failure([NSError errorWithDomain:@"idrs" code:-1 userInfo:@{NSLocalizedDescriptionKey: responseObject[@"Code"]}]);
        } else {
            NSMutableDictionary *loginDic = [[NSMutableDictionary alloc] init];
            NSDictionary *dataDic = responseObject[@"Data"][@"TokenData"];
            for (NSString *key in dataDic.allKeys) {
                [loginDic setObject:dataDic[key] forKey:key];
            }
            info.channel = channelName;
            info.appid = loginDic[@"AppId"];
            info.nonce = loginDic[@"Nonce"];
            info.user_id = loginDic[@"UserId"];
            info.token = loginDic[@"Token"];
            info.timestamp = [loginDic[@"Timestamp"] longLongValue];
            info.gslb = loginDic[@"Gslb"];
            NSString *liveId = loginDic[@"LiveId"];
            success(info, liveId);
        }
    } failure:^(NSError * _Nonnull error) {
        failure(error);
    }];
}

RTCSampleChatViewController is the main view controller for the remote meeting; all of the following operations are implemented there. First, the initialization:

- (void)initializeSDK{
    //Create the SDK instance and register the delegate; extras may be empty
    NSMutableDictionary *extrasDic = [[NSMutableDictionary alloc] init];
    [extrasDic setValue:@"ENGINE_BASIC_QUALITY_MODE" forKey:@"user_specified_engine_mode"];
    [extrasDic setValue:@"SCENE_MUSIC_MODE" forKey:@"user_specified_scene_mode"];
    NSError *parseError = nil;
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:extrasDic options:NSJSONWritingPrettyPrinted error:&parseError];
    NSString *extras = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    _engine = [AliRtcEngine sharedInstance:self extras:extras];
    [_engine setSubscribeAudioSampleRate:AliRtcAudioSampleRate_16000];
    [self.engine setDeviceOrientationMode:(AliRtcOrientationModeAuto)];
}

2. Building the preview UI

The flow was outlined above; here it is in more detail.

Preview the local stream:
- (void)startPreview{
    //Set up the local preview view
    AliVideoCanvas *canvas = [[AliVideoCanvas alloc] init];
    AliRenderView *viewLocal = [[AliRenderView alloc] initWithFrame:CGRectMake(self.view.frame.origin.x, self.view.frame.origin.y, self.view.frame.size.height, self.view.frame.size.width)];
    canvas.view = viewLocal;
    canvas.renderMode = AliRtcRenderModeAuto;
    [self.view addSubview:viewLocal];
    [self.engine setLocalViewConfig:canvas forTrack:AliRtcVideoTrackCamera];
    //Start the local preview
    [self.engine startPreview];
    [self.engine registerVideoSampleObserver];
    [self startPreview:nil];
}

Log in to the server and start publishing:

- (void)startPreview:(UIButton *)sender {
    //Automatic (vs. manual) publish/subscribe mode
    [self.engine setAutoPublish:YES withAutoSubscribe:YES];
    [self JoinChannel:self.info :self.manName]; //log in
    //Keep the screen from locking
    [UIApplication sharedApplication].idleTimerDisabled = YES;
    //Manual publishing -- call after a successful login
    //[self.engine configLocalCameraPublish:true];
    //[self.engine configLocalAudioPublish:true];
    //[self.engine publish:^(int errCode) {
    //    if (errCode) {
    //        //Handle the error
    //    }
    //}];
}

Join the meeting

  • Join the channel with the data passed in earlier (user info and current role):

- (void)JoinChannel:(AliRtcAuthInfo*)authInfo :(NSString*)userName{
    //Join the channel
    NSLog(@"%@", authInfo);
    [self.engine joinChannel:authInfo name:userName onResult:^(NSInteger errCode) {
        //Join-channel result callback
        NSLog(@"joinChannel result: %d", (int)errCode);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (errCode != 0) {
                //Handle the join failure
            }
            [self runLoopGet];
            self.isJoinChannel = YES;
        });
        //Subscribe to the audio data (local)
        [self.engine subscribeAudioData:AliRtcAudiosourcePub];
    }];
}
Preview the remote streams:

@implementation RTCRemoterUserView
{
    AliRenderView *viewRemote;
}
- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        //Set up the remote stream view
        CGRect rc = CGRectMake(0, 0, 320, 240);
        viewRemote = [[AliRenderView alloc] initWithFrame:rc];
        self.backgroundColor = [UIColor clearColor];
        CGRect rcc = CGRectMake(0, 0, 320, 240);
        _facedetect = [[FaceDetectView alloc] initWithFrame:rcc];
        _facedetect.isRemoteWindow = true;
        _facedetect.isRTC = false;
        [self addSubview:_facedetect];
        _handdetect = [[HandDetectView alloc] initWithFrame:rcc];
        _handdetect.isRemoteWindow = true;
        _handdetect.isRTC = false;
        [self addSubview:_handdetect];
    }
    return self;
}
- (void)updateUserRenderview:(AliRenderView *)view {
    view.backgroundColor = [UIColor clearColor];
    view.frame = viewRemote.frame;
    viewRemote = view;
    [self addSubview:viewRemote];
    [self bringSubviewToFront:_facedetect];
    [self bringSubviewToFront:_handdetect];
}
@end

CollectionView layout:

CGRect rc;
rc.origin.x = 10;
rc.origin.y = [UIApplication sharedApplication].statusBarFrame.size.height + 64;
rc.size = CGSizeMake(320, 240);
UICollectionViewFlowLayout *flowLayout = [[UICollectionViewFlowLayout alloc] init];
flowLayout.itemSize = CGSizeMake(320, 240);
flowLayout.minimumLineSpacing = 10;
flowLayout.minimumInteritemSpacing = 10;
flowLayout.scrollDirection = UICollectionViewScrollDirectionHorizontal;
self.remoteUserView = [[UICollectionView alloc] initWithFrame:CGRectZero collectionViewLayout:flowLayout];
self.remoteUserView.frame = rc;
self.remoteUserView.backgroundColor = [UIColor clearColor];
self.remoteUserView.delegate = self;
self.remoteUserView.dataSource = self;
self.remoteUserView.showsHorizontalScrollIndicator = NO;
[self.remoteUserView registerClass:[RTCRemoterUserView class] forCellWithReuseIdentifier:@"cell"];
[self.view addSubview:self.remoteUserView];
_remoteUserManager = [RTCSampleRemoteUserManager shareManager];

Remote window listener (called whenever the number of remote windows changes):

- (void)onSubscribeChangedNotify:(NSString *)uid audioTrack:(AliRtcAudioTrack)audioTrack videoTrack:(AliRtcVideoTrack)videoTrack {
    //Remote-subscription callback received
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.remoteUserManager updateRemoteUser:uid forTrack:videoTrack];
        if (videoTrack == AliRtcVideoTrackCamera) {
            AliVideoCanvas *canvas = [[AliVideoCanvas alloc] init];
            canvas.renderMode = AliRtcRenderModeAuto;
            canvas.view = [self.remoteUserManager cameraView:uid];
            [self.engine setRemoteViewConfig:canvas uid:uid forTrack:AliRtcVideoTrackCamera];
        } else if (videoTrack == AliRtcVideoTrackScreen) {
            AliVideoCanvas *canvas2 = [[AliVideoCanvas alloc] init];
            canvas2.renderMode = AliRtcRenderModeAuto;
            canvas2.view = [self.remoteUserManager screenView:uid];
            [self.engine setRemoteViewConfig:canvas2 uid:uid forTrack:AliRtcVideoTrackScreen];
        } else if (videoTrack == AliRtcVideoTrackBoth) {
            AliVideoCanvas *canvas = [[AliVideoCanvas alloc] init];
            canvas.renderMode = AliRtcRenderModeAuto;
            canvas.view = [self.remoteUserManager cameraView:uid];
            [self.engine setRemoteViewConfig:canvas uid:uid forTrack:AliRtcVideoTrackCamera];
            AliVideoCanvas *canvas2 = [[AliVideoCanvas alloc] init];
            canvas2.renderMode = AliRtcRenderModeAuto;
            canvas2.view = [self.remoteUserManager screenView:uid];
            [self.engine setRemoteViewConfig:canvas2 uid:uid forTrack:AliRtcVideoTrackScreen];
        }
        [self.remoteUserView reloadData];
    });
}

Displaying the remote video streams in the collectionView:

#pragma mark - uicollectionview delegate & datasource
- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section {
    return [self.remoteUserManager allOnlineUsers].count;
}
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    RTCRemoterUserView *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"cell" forIndexPath:indexPath];
    RTCSampleRemoteUserModel *model = [self.remoteUserManager allOnlineUsers][indexPath.row];
    AliRenderView *view = model.view;
    _yuanDetectView = cell.facedetect;
    _yuanHandDetectView = cell.handdetect;
    [cell updateUserRenderview:view];
    return cell;
}

Callback methods:

Remote video stream callback:

- (void)onRemoteVideoSample:(NSString *)uid videoSource:(AliRtcVideoSource)videoSource videoSample:(AliRtcVideoDataSample *)videoSample {
}

Local video stream callback:

- (void)onCaptureVideoSample:(AliRtcVideoSource)videoSource videoSample:(AliRtcVideoDataSample *)videoSample {
}

Audio stream callback:

- (void)onAudioSampleCallback:(AliRtcAudioSource)audioSource audioSample:(AliRtcAudioDataSample *)audioSample {
}

3. API overview

Start recording, stop recording, and end the meeting:

//Start recording & stop recording & end the meeting
//"action": "START_RECORDING" //START_RECORDING STOP_RECORDING COMPLETED
//START_RECORDING: call when you are ready to start recording
//STOP_RECORDING: call when you are ready to stop recording
//COMPLETED: ends the meeting; usually called after recording stops
-(void)recordsChannel:(NSString*)action{
    NSDictionary *dic = @{@"Action":@"UpdateLive",@"LiveId":self.liveId,@"UserId":self.userId,@"Status":action};
    [idrs_NetWorking HTTPWithMethod:@"POST" body:dic success:^(id _Nonnull responseObject){
        NSLog(@"Success: %@", responseObject);
    } failure:^(NSError * _Nonnull error) {
        NSLog(@"Error: %@", error);
    }];
}

Leave the meeting (typically called when leaving mid-session):

-(void)exitChannel{
    //Leave the room mid-session
    NSDictionary *dic = @{@"Action":@"ExitLive",@"Channel":self.channelName,@"UserId":self.userId};
    [idrs_NetWorking HTTPWithMethod:@"POST" body:dic success:^(id _Nonnull responseObject) {
        NSLog(@"Success: %@", responseObject);
        NSDictionary *loginDic = responseObject[@"data"];
    } failure:^(NSError * _Nonnull error) {
        NSLog(@"Error: %@", error);
    }];
}

On exit you also need to call:

//Stop the local preview
[self.engine stopPreview];
//Leave the channel
[self.engine leaveChannel];
//Destroy the SDK instance
[AliRtcEngine destroy];
AI Detection

1. Initialization

-(void)initIDRSSDK{
    //AI detection
    [IDRSSDK initWithAudioCaptureType:FEED_CAPTURE_AUDIO url:@"http://console.idrs.aliyuncs.com" appId:@"your appId" packageName:@"your bundle_id" deviceId:@"device id" success:^(id responseObject) {
        self->_idrsSDK = responseObject;
    } failure:^(NSError *error) {
        NSLog(@"IDRSSDK activation failed: %@", error);
    }];
    //Local face-detection util
    _localFaceDetectUtil = [[FaceDetectUtil alloc] init];
}

2. AI capability detection

The snippets below are abridged to show only the API usage, without the surrounding logic; refer to the demo for the full implementation.

  • Face detection
  • Detect from a raw data stream:

uint8_t *data = [IDRSUtils convert420PixelBufferToRawData:newBuffer];
IDRSFaceDetectParam *detectParam = [[IDRSFaceDetectParam alloc] init];
detectParam.dataType = IDRSFaceDetectInputTypeChar;
detectParam.data = data;
detectParam.width = videoSample.width;
detectParam.height = videoSample.height;
detectParam.format = 0;
detectParam.inputAngle = 0;
[_idrsSDK faceTrackFromVideo:detectParam faceDetectionCallback:^(NSError *error, NSArray<FaceDetectionOutput *> *faces) {
    //Handle the detected faces
}];
free(data);

Detect from an image:

UIImage *image = [self.faceSDK getImageFromRPVideo:newBuffer];
IDRSFaceDetectParam *detectParam = [[IDRSFaceDetectParam alloc] init];
detectParam.dataType = IDRSFaceDetectInputTypeImage;
detectParam.image = image;
detectParam.inputAngle = 0;
detectParam.outputAngle = 0;
detectParam.faceNetType = 0;
detectParam.supportFaceRecognition = isLocalFaceChanged;
detectParam.supportFaceLiveness = isLocalFaceChanged;
[_idrsSDK faceTrackFromImage:detectParam faceDetectionCallback:^(NSError *error, NSArray<FaceDetectionOutput *> *faces) {
    //Handle the detected faces
}];

Detect from the local camera stream (not used for remote dual recording):

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
IDRSFaceDetectParam *detectParam = [[IDRSFaceDetectParam alloc] init];
detectParam.dataType = IDRSFaceDetectInputTypePixelBuffer;
detectParam.buffer = pixelBuffer;
detectParam.inputAngle = inAngle;
detectParam.outputAngle = outAngle;
[_idrsSDK faceTrackFromVideo:detectParam faceDetectionCallback:^(NSError *error, NSArray<FaceDetectionOutput*> *faces) {
    dispatch_async(dispatch_get_main_queue(), ^{
        //Handle the detected faces on the main thread
    });
}];

Face capture:

UIImage *face = [weakSelf screenshotFace:newBuffer];
//Extract the face feature from the image
IDRSFaceDetectParam *dete = [[IDRSFaceDetectParam alloc] init];
dete.dataType = IDRSFaceDetectInputTypeImage;
dete.image = face;
dete.inputAngle = 0;
dete.outputAngle = 0;
NSArray<FaceDetectionOutput *> *imageface = [weakSelf.faceSDK detectFace:dete];
//Save the policyholder's face feature
if (imageface.count > 0) {
    weakSelf.faceFeature = imageface[0].feature;
    //Show a capture-success message
    [weakSelf SuccessShow];
}

Face comparison:

NSMutableArray *myFaces = [[NSMutableArray alloc] init];
for (FaceDetectionOutput *face in faces) {
    float score = [weakSelf.faceSDK faceRecognitionSimilarity:face.feature feature2:weakSelf.faceFeature];
    [myFaces addObject:[NSNumber numberWithFloat:score]];
}
NSDictionary *face_Max = [weakSelf MaxFaceWithArray:myFaces];
if ([face_Max[@"max_number"] floatValue] > 0.5) {
    NSString *named = @"";
    if (weakSelf.Role == 1 || weakSelf.Role == 3) {
        named = @"保险代理"; //insurance agent
    } else if (weakSelf.Role == 2) {
        named = @"投保人"; //policyholder
    } else {
        named = @"受益人"; //beneficiary
    }
    faces[[face_Max[@"max_index"] intValue]].label = named;
}
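The demo's MaxFaceWithArray: helper is not shown above. A minimal equivalent, returning the highest similarity score and its index under the same max_number/max_index keys, might look like this (a sketch, not the demo's actual implementation):

```objectivec
//Find the highest score in the array and remember where it was
- (NSDictionary *)MaxFaceWithArray:(NSArray<NSNumber *> *)scores {
    float maxScore = -1;
    NSUInteger maxIndex = 0;
    for (NSUInteger i = 0; i < scores.count; i++) {
        if (scores[i].floatValue > maxScore) {
            maxScore = scores[i].floatValue;
            maxIndex = i;
        }
    }
    return @{@"max_number": @(maxScore), @"max_index": @(maxIndex)};
}
```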

ID-card recognition:

if (weakSelf.isOCR) {
    @autoreleasepool {
        //OCR the ID card. Note: isFrontCamera must be NO here
        IDRSIDCardDetectParam *idCardParam = [[IDRSIDCardDetectParam alloc] init];
        idCardParam.dataType = IDRSIDCardInputTypePixelBuffer;
        idCardParam.buffer = newBuffer;
        NSArray<NSNumber*> *kXMediaOptionsROIKey = @[@(0.2),@(0.2),@(0.6),@(0.6)];
        IDCardDetectionOutput *ocrResult = [_idrsSDK detectIDCard:idCardParam roiKey:kXMediaOptionsROIKey rotate:@(0) isFrontCamera:NO isDetectFrontIDCard:YES];
        if (ocrResult != nil && ocrResult.num.length > 0) {
            //ID-card information captured
            writeFrames++;
            if (writeFrames > 2) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    writeFrames = 0;
                    //Update the meta file
                    [self.idrsSDK addPolicy:ocrResult.num title:@"人身保险投保提示书"];
                    //Display the ID-card number
                    self.IDCarResult.text = ocrResult.num;
                    [self SuccessShow];
                });
            }
        }
    }
}

Liveness detection:

FaceDetectionOutput *firstResult = [faces firstObject];
int liveType = [self.localFaceDetectUtil getLiveType:firstResult.faceId];
NSString *type = @"";
if (liveType == 0) {
    type = @"真人"; //live person
} else if (liveType == 1) {
    type = @"翻拍"; //re-shot (photo of a photo or screen)
}
//Follow up with your own handling, e.g. show an alert when the face is not a live person

Action recognition: dynamic gestures

if (weakSelf.isHand) {
    @autoreleasepool {
        IDRSHandDetectParam *handParam = [[IDRSHandDetectParam alloc] init];
        handParam.dataType = IDRSHandInputTypeRGBA;
        handParam.buffer = newBuffer;
        handParam.outAngle = 0;
        NSArray<HandDetectionOutput *> *handResults = [_idrsSDK detectHandGesture:handParam];
        if (handResults.count > 0) {
            if (handResults[0].phone_touched_score > 0) {
                if (handResults[0].hand_phone_action != 0) {
                    writeFrames++;
                    if (writeFrames > 2) {
                        [self SuccessShow];
                    }
                }
                //Hand bounding box
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.handDetectView.hidden = NO;
                    CGSize size = CGSizeMake(videoSample.width, videoSample.height);
                    [self createViewFrame:size];
                    HandDetectView *handDetectView = self.handDetectView;
                    handDetectView.presetSize = size;
                    handDetectView.detectResult = handResults;
                });
            }
        } else {
            dispatch_async(dispatch_get_main_queue(), ^{
                if (weakSelf.handDetectView.hidden == NO) {
                    HandDetectView *handDetectView = (HandDetectView*)weakSelf.handDetectView;
                    handDetectView.hidden = YES;
                }
            });
        }
    }
}

Action recognition: static gestures

if (self.isStaticHand) {
    IDRSHandDetectParam *handParam = [[IDRSHandDetectParam alloc] init];
    handParam.dataType = IDRSHandInputTypeRGBA;
    handParam.buffer = newBuffer;
    handParam.outAngle = 0;
    NSArray<HandDetectionOutput *> *handResults = [_idrsSDK detectHandStaticGesture:handParam];
    dispatch_async(dispatch_get_main_queue(), ^{
        if (handResults.count > 0) {
            self.handDetectView.hidden = NO;
            CGSize size = CGSizeMake(videoSample.width, videoSample.height);
            [self createViewFrame:size];
            HandDetectView *handDetectView = self.handDetectView;
            handDetectView.presetSize = size;
            handDetectView.detectResult = handResults;
            HandDetectionOutput *handResult = handResults[0];
            if (handResult.hand_action_type == 0 && handResult.hand_static_action > 0) {
                if (handResult.hand_static_action == 6) {
                    NSLog(@"Gesture: heart");
                } else if (handResult.hand_static_action == 12) {
                    NSLog(@"Gesture: thumbs-up");
                }
            }
        } else {
            if (weakSelf.handDetectView.hidden == NO) {
                HandDetectView *handDetectView = (HandDetectView*)weakSelf.handDetectView;
                handDetectView.hidden = YES;
            }
        }
    });
}

Face box (hand box)

  • The hand box is used the same way as the face box.

//Face-box position calculation: FaceDetectView.m
//Face bounding box:
dispatch_async(dispatch_get_main_queue(), ^{
    self.detectView.hidden = NO;
    CGSize size = CGSizeMake(videoSample.width, videoSample.height); //original size of the video frames, needed by the face box
    [self createViewFrame:size];
    FaceDetectView *faceDetectView = self.detectView;
    faceDetectView.presetSize = size;
    faceDetectView.detectResult = faces;
});

3. Activation-word recognition

Using activation words:

if (_isOnNui) {
    //Result callback
    _idrsSDK.onNuiCallback = ^(NSString *result) {
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"------%@", result);
            switch (weakSelf.StepNo) {
                case 3:
                    if ([result isEqual:@"\"同意\""]) { //"agree"
                        [weakSelf SuccessShow];
                    }
                    break;
                case 6:
                    if ([result isEqual:@"\"清楚\""]) { //"understood"
                        [weakSelf SuccessShow];
                    }
                    break;
                default:
                    break;
            }
        });
    };
    [_idrsSDK feedAudioFrame:frame]; //feed in the audio stream to analyze
    [_idrsSDK startDialog];
}

Flow Control

Flow control in the demo is driven by a handler, implemented in the RemoteHandler class. Since the flow is fairly involved, here is the general process first.

Basic flow

The flow consists of 11 sections:

#define START_RECORD @"开始录制"        //start recording
#define INSURANCE_AGENT @"保险代理"     //insurance agent
#define TOU_BAO_REN @"投保人"           //policyholder
#define PRIVACY @"隐私"                 //privacy
#define SELF_INTRODUCYION @"自我介绍"   //self-introduction
#define CONTENT_HINT @"内容提示"        //content hint
#define WARN_RISK @"风险预警"           //risk warning
#define START_SIGN @"签字"              //signing
#define START_SIGN1 @"签字1"            //signing 1
#define START_SIGN2 @"签字2"            //signing 2
#define STOP_RECORD @"结束录制"         //stop recording

They run in order from top to bottom.
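Because the sections always run top to bottom, the chained if-else used later in putMsg: can equivalently be driven by an ordered array. A sketch, assuming the macros above (helper names are illustrative):

```objectivec
//Ordered list of sections; the next section is simply the following element
static NSArray<NSString *> *SectionOrder(void) {
    return @[START_RECORD, INSURANCE_AGENT, TOU_BAO_REN, PRIVACY,
             SELF_INTRODUCYION, CONTENT_HINT, WARN_RISK,
             START_SIGN, START_SIGN1, START_SIGN2, STOP_RECORD];
}

static NSString *NextSection(NSString *current) {
    NSArray<NSString *> *order = SectionOrder();
    NSUInteger idx = [order indexOfObject:current];
    if (idx == NSNotFound || idx + 1 >= order.count) {
        return current; //unknown or last section: stay put
    }
    return order[idx + 1];
}
```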

  • There are three roles:
    • Insurance agent
    • Policyholder
    • Beneficiary
  • Each role's check items are configurable, for example:
    • Insurance agent: START_RECORD, INSURANCE_AGENT, PRIVACY, WARN_RISK
    • Policyholder: TOU_BAO_REN
    • Beneficiary: SELF_INTRODUCYION, START_SIGN1
  • Which role each remote end plays is also configurable, for example:
    • one end: insurance agent
    • the other end: policyholder and beneficiary
  • Many flow configurations are therefore possible; the demo implements the one described above. Adapt it to your own requirements: this is business logic, so you can follow the demo or implement your own approach; the demo is not necessarily optimal.
  • The demo reports detection results with the following codes:

result = -1 : no state
result = 1  : timeout (every failure is reported as 1; the timeout message shown depends on the section)
result = 4  : agent portrait captured
result = 5  : policyholder portrait captured
result = 6  : ID-card number recognized
result = 7  : signing gesture detected
result = 14 : activation word matched
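For display purposes these codes can be mapped to messages with a simple lookup; a hypothetical helper (the name and the wording are illustrative, not the demo's):

```objectivec
//Illustrative mapping of result codes to display text
static NSString *MessageForResult(int result) {
    switch (result) {
        case -1: return @"";                               //no state
        case 1:  return @"Timed out";                      //actual text depends on the section
        case 4:  return @"Agent portrait captured";
        case 5:  return @"Policyholder portrait captured";
        case 6:  return @"ID-card number recognized";
        case 7:  return @"Signing gesture detected";
        case 14: return @"Activation word matched";
        default: return @"";
    }
}
```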

Section and information synchronization

Because every end of the multi-party meeting runs its own detection, both the current section and the detection results must be synchronized. For example, with ends A and B in a dual recording:

  1. A is in section 2
  2. B is also in section 2
  3. A performs face capture and, once it succeeds, moves to section 3
  4. B must then also move to section 3, and must display whether A's face capture succeeded

Section and information synchronization is therefore implemented by polling two APIs:

Fetch the section:

-(void)runLoopGet{
    __weak __typeof(self) weakSelf = self;
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/section", self.liveId];
    [idrs_NetWorking HttpGetWithMethod:method success:^(id _Nonnull responseObject) {
        NSLog(@"GET response\n%@", responseObject);
        NSString *sectionString = responseObject[@"data"][@"section"];
        //..... compare the fetched section with the local one; if they differ, advance the local state
        if (self.isLoopRun) {
            [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(runLoopGet) userInfo:nil repeats:false];
        }
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
        NSLog(@"GET request failed: %@", error);
    }];
}

Every end polls this API continuously to pull the latest section and information.
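The comparison elided inside runLoopGet is demo business logic. Under the assumption that the server returns the same JSON object that putMsg: uploads (a name and a result), a sketch might look like this (showResult: is a hypothetical UI helper, not part of the demo):

```objectivec
//Hypothetical sketch of the section comparison (not the demo's actual code)
NSData *jsonData = [sectionString dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *remote = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:nil];
NSString *remoteSection = remote[@"name"];
int remoteResult = [remote[@"result"] intValue];
if (![remoteSection isEqualToString:weakSelf.currentStep]) {
    //Another end has advanced: adopt the new section and update the UI
    weakSelf.currentStep = remoteSection;
    [weakSelf showResult:remoteResult]; //hypothetical UI update helper
}
```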

Push the section:

-(void)putMsg:(BOOL)isFirst WithResult:(int)res{
    NSNumber *result = [NSNumber numberWithInt:res];
    NSString *section = @"";
    if (isFirst){
        if ([_currentStep isEqual:START_RECORD]) {
            section = INSURANCE_AGENT;
        }else if ([_currentStep isEqual:INSURANCE_AGENT]){
            section = TOU_BAO_REN;
        }else if ([_currentStep isEqual:TOU_BAO_REN]){
            section = PRIVACY;
        }else if ([_currentStep isEqual:PRIVACY]){
            section = SELF_INTRODUCYION;
        }else if ([_currentStep isEqual:SELF_INTRODUCYION]){
            section = CONTENT_HINT;
        }else if ([_currentStep isEqual:CONTENT_HINT]){
            section = WARN_RISK;
        }else if ([_currentStep isEqual:WARN_RISK]){
            section = START_SIGN;
        }else if ([_currentStep isEqual:START_SIGN]){
            section = START_SIGN1;
        }else if ([_currentStep isEqual:START_SIGN1]){
            section = START_SIGN2;
        }else if ([_currentStep isEqual:START_SIGN2]){
            section = STOP_RECORD;
        }
    }else{
        section = _currentStep;
    }
    NSDictionary *dicc = @{@"name":section, @"result":result};
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:dicc options:0 error:nil];
    NSString *dicString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    //Submit the section information
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/section", _liveId];
    NSDictionary *dic = @{@"liveId":_liveId, @"section":dicString};
    NSLog(@"Uploading section: %@", dic[@"section"]);
    [idrs_NetWorking HttpWithPost_Get:@"PUT" WithMethod:method Body:dic success:^(id _Nonnull responseObject) {
    } failure:^(NSError * _Nonnull error) {
        NSLog(@"Section upload failed: %@", error);
    }];
}

Parameters:

  • isFirst: whether this is the first upload of this state
  • res: the status code

This pushes the latest section to the server so the other ends can fetch it. For example:

  1. A is in section 2
  2. B is also in section 2
  3. A performs face capture and, once it succeeds, moves to section 3 and pushes the new section to the server
  4. B, which keeps polling the fetch-section API, pulls the new section and updates

Liveness and co-presence synchronization

  • When one end detects a non-live face, all ends must be marked non-live and the flow paused.
  • When a participant leaves the video, all ends must be notified and the flow stopped.
  • Three APIs are involved:

1. Report a problem

-(void)putQuestion:(NSString*)event{
    //Upload to the server
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/event", _liveId];
    NSDictionary *dic = @{@"event":event, @"userId":self.userId};
    [idrs_NetWorking HttpWithPost_Get:@"POST" WithMethod:method Body:dic success:^(id _Nonnull responseObject) {
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
    }];
}

Parameters:

  • event: the problem description (e.g. "the policyholder is not a live person")

When a problem occurs on one end, it is reported to the server, which stores the exception.

2. Clear a problem

-(void)deleteQuestion{
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/event", _liveId];
    NSDictionary *dic = @{@"userId":self.userId};
    [idrs_NetWorking HttpWithPost_Get:@"DELETE" WithMethod:method Body:dic success:^(id _Nonnull responseObject) {
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
    }];
}

Once the problem on an end is resolved, that end deletes the exception from the server.

3. Fetch exceptions

-(void)getQuestion{
    //Fetch the problems
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/events", _liveId];
    [idrs_NetWorking HttpWithPost_Get:@"GET" WithMethod:method Body:nil success:^(id _Nonnull responseObject) {
        NSDictionary *data = [responseObject objectForKey:@"data"];
        NSArray *events = data[@"events"];
        //Display the exceptions and handle them at the app layer
        if (_isLoopRun) {
            [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(getQuestion) userInfo:nil repeats:false];
        }
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
    }];
}

Every end polls this API for errors: if any exist, the flow is paused; once they are gone, the flow resumes.

Narration Playback

Play the narration through the RTC API:

[_engine startAudioAccompanyWithFile:filePath onlyLocalPlay:false replaceMic:false loopCycles:1];

Parameters:

  • filePath: path of the media (MP4) file to play; network URLs are supported
  • onlyLocalPlay: true plays locally only; false plays locally and publishes to the remote ends
  • replaceMic: true replaces the local mic stream with the accompaniment; false publishes both together
  • loopCycles: number of loops; -1 loops forever

Uploading Auxiliary Information

Get the upload URL:

[idrs_NetWorkingManager ossUpdataWithfileName:_metaFileName block:^(id _Nonnull response, NSError * _Nonnull err) {
    NSString *Url = response[@"Data"];
}];

Call the API above first; its parameters are:

  • _metaFileName: the name of your meta file
  • response: the network callback
  • result: on success it returns a URL, which is then used for the actual upload

With the URL in hand, perform the actual upload:

NSString *filePath = [_idrsSDK saveMetaWithfileName:_metaFileName andfilePath:_metaFilePath];
[idrs_NetWorkingManager updataFileWithUrl:url filePath:filePath complete:^(id _Nonnull responseObject, NSError * _Nonnull error) {
    [self detections:url];
    NSLog(@"Upload succeeded: %@", responseObject);
}];

Parameters:

  • url: the URL returned above; the upload destination
  • filePath: the local path of the meta file
  • responseObject: the network callback

If this call succeeds, the upload is complete.