iOS8 Core Image In Swift: Automatic image enhancement and using the built-in filters
iOS8 Core Image In Swift: More complex filters
iOS8 Core Image In Swift: Face detection and mosaic (pixellation)
iOS8 Core Image In Swift: Real-time video filters
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    var captureSession: AVCaptureSession!
    var previewLayer: CALayer!
    ......
In addition, the view controller adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol, which we will get to later. To use the AV framework we must first import it: import AVFoundation
override func viewDidLoad() {
    super.viewDidLoad()

    // The preview layer fills the screen; width and height are swapped and
    // the layer is rotated 90° because the camera delivers landscape frames.
    previewLayer = CALayer()
    previewLayer.bounds = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width)
    previewLayer.position = CGPointMake(self.view.frame.size.width / 2.0, self.view.frame.size.height / 2.0)
    previewLayer.setAffineTransform(CGAffineTransformMakeRotation(CGFloat(M_PI / 2.0)))
    self.view.layer.insertSublayer(previewLayer, atIndex: 0)

    setupCaptureSession()
}
func setupCaptureSession() {
    captureSession = AVCaptureSession()
    captureSession.beginConfiguration()
    captureSession.sessionPreset = AVCaptureSessionPresetLow

    // Use the default video device (the back camera) as input.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    let deviceInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: nil) as AVCaptureDeviceInput
    if captureSession.canAddInput(deviceInput) {
        captureSession.addInput(deviceInput)
    }

    // Ask for bi-planar YCbCr frames and drop any frames that arrive late.
    let dataOutput = AVCaptureVideoDataOutput()
    dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
    dataOutput.alwaysDiscardsLateVideoFrames = true
    if captureSession.canAddOutput(dataOutput) {
        captureSession.addOutput(dataOutput)
    }

    // Deliver sample buffers on a serial background queue.
    let queue = dispatch_queue_create("VideoQueue", DISPATCH_QUEUE_SERIAL)
    dataOutput.setSampleBufferDelegate(self, queue: queue)
    captureSession.commitConfiguration()
}
The real work starts with this method.
We have now finished building the session, but it is not running yet. Much like accessing a database — open the database, establish a connection, access the data, close the connection, close the database — a capture session has a lifecycle of its own. We start it in the openCamera method (a matching teardown is sketched right after it):
@IBAction func openCamera(sender: UIButton) {
    sender.enabled = false
    captureSession.startRunning()
}
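To close out the database analogy, here is a minimal teardown sketch. closeCamera is a hypothetical name that is not in the original code; stopRunning() is the documented AVCaptureSession counterpart to startRunning():

@IBAction func closeCamera(sender: UIButton) {
    // Hypothetical counterpart to openCamera: stop delivering frames
    // and release the camera. stopRunning() pairs with startRunning().
    captureSession.stopRunning()
    sender.enabled = false
}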
AVCaptureVideoDataOutputSampleBufferDelegate declares one callback for receiving frames:

optional func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)

Implementing this callback is all we need:
func captureOutput(captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    // Plane 0 of a 420YpCbCr8BiPlanar buffer is the luma (Y) plane,
    // so rendering it alone yields a grayscale image.
    let width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
    let lumaBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)

    let grayColorSpace = CGColorSpaceCreateDeviceGray()
    let context = CGBitmapContextCreate(lumaBuffer, width, height, 8, bytesPerRow, grayColorSpace, CGBitmapInfo.allZeros)
    let cgImage = CGBitmapContextCreateImage(context)
    // Balance the lock above before leaving the callback.
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    dispatch_sync(dispatch_get_main_queue(), {
        self.previewLayer.contents = cgImage
    })
}
var filter: CIFilter!
lazy var context: CIContext = {
    // Back the CIContext with an OpenGL ES 2.0 context so filters render on the GPU.
    let eaglContext = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
    // Disabling color management avoids an extra colorspace conversion per frame.
    let options = [kCIContextWorkingColorSpace : NSNull()]
    return CIContext(EAGLContext: eaglContext, options: options)
}()
In fact, with a GPU context created via contextWithOptions:, rendering does happen on the GPU, but the output image cannot be displayed directly; only once it has been copied back into CPU memory can it be converted into a displayable image type such as a UIImage.
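A minimal sketch of that copy-back path (the renderForDisplay helper is illustrative, not part of the original code): createCGImage(_:fromRect:) is what forces the rendered pixels back into CPU memory as a displayable CGImage.

import CoreImage

func renderForDisplay(ciImage: CIImage) -> CGImage {
    // A GPU context created via contextWithOptions:
    // (the CIContext(options:) initializer in Swift).
    let gpuContext = CIContext(options: [kCIContextWorkingColorSpace : NSNull()])
    // createCGImage copies the rendered result back into CPU memory,
    // yielding a CGImage that can be shown (or wrapped in a UIImage).
    return gpuContext.createCGImage(ciImage, fromRect: ciImage.extent())
}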
In setupCaptureSession, replace the output pixel format kCVPixelFormatType_420YpCbCr8BiPlanarFullRange with kCVPixelFormatType_32BGRA, so each frame can be handed to Core Image as a CIImage.
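The videoSettings line in setupCaptureSession then becomes:

dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey : kCVPixelFormatType_32BGRA]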
Then adjust the session callback into the Core Image form we are already familiar with, like this:
func captureOutput(captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // With a BGRA buffer there is no need to lock the base address or
    // read the planes by hand; Core Image wraps the buffer directly.
    var outputImage = CIImage(CVPixelBuffer: imageBuffer)
    if self.filter != nil {
        self.filter.setValue(outputImage, forKey: kCIInputImageKey)
        outputImage = self.filter.outputImage
    }
    let cgImage = self.context.createCGImage(outputImage, fromRect: outputImage.extent())
    dispatch_sync(dispatch_get_main_queue(), {
        self.previewLayer.contents = cgImage
    })
}
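With that in place, assigning any Core Image filter to the filter property changes what the preview layer shows. A hypothetical example (the sepia filter and its intensity value are illustrative, not from the original):

// e.g. in a button handler, before or while the session is running:
filter = CIFilter(name: "CISepiaTone")
filter.setValue(0.8, forKey: kCIInputIntensityKey)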