Revision: 280724
Author: repst...@apple.com
Date: 2021-08-06 09:03:29 -0700 (Fri, 06 Aug 2021)
Log Message
Cherry-pick r280702. rdar://problem/81618758
[iOS] getUserMedia sometimes doesn't capture from specified microphone
https://bugs.webkit.org/show_bug.cgi?id=228753
rdar://79704226
Reviewed by Youenn Fablet.
Source/WebCore:
The system will always choose the "default" audio input source unless
+[AVAudioSession setPreferredInput:error:] is called first, and that only works
if the audio session category has been set to PlayAndRecord *before* it is called,
so configure the audio session for recording before we choose and configure the
audio capture device.
Tested manually; this only reproduces on hardware.
* platform/audio/PlatformMediaSessionManager.cpp:
(WebCore::PlatformMediaSessionManager::activeAudioSessionRequired const): Audio
capture requires an active audio session.
(WebCore::PlatformMediaSessionManager::removeSession): Move `#if USE(AUDIO_SESSION)`
guard inside of maybeDeactivateAudioSession so it isn't spread throughout the file.
(WebCore::PlatformMediaSessionManager::sessionWillBeginPlayback): Ditto.
(WebCore::PlatformMediaSessionManager::processWillSuspend): Ditto.
(WebCore::PlatformMediaSessionManager::processDidResume): Ditto.
(WebCore::PlatformMediaSessionManager::sessionCanProduceAudioChanged): Add logging,
call `maybeActivateAudioSession()` so we activate the audio session if necessary.
(WebCore::PlatformMediaSessionManager::addAudioCaptureSource): Call updateSessionState
instead of scheduleUpdateSessionState so the audio session category is updated
immediately.
(WebCore::PlatformMediaSessionManager::maybeDeactivateAudioSession): Move
`#if USE(AUDIO_SESSION)` into the function so it doesn't need to be spread
throughout the file.
(WebCore::PlatformMediaSessionManager::maybeActivateAudioSession): Ditto.
* platform/audio/PlatformMediaSessionManager.h:
(WebCore::PlatformMediaSessionManager::isApplicationInBackground const):
* platform/audio/ios/AudioSessionIOS.mm:
(WebCore::AudioSessionIOS::setPreferredBufferSize): Log an error if we are unable
to set the preferred buffer size.
* platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h:
* platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm:
(WebCore::AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID):
New, set the preferred input so capture will select the device we want.
(WebCore::AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices): Remove
m_recomputeDevices, `setAudioCaptureDevices` has been restructured so we don't need it.
(WebCore::AVAudioSessionCaptureDeviceManager::computeCaptureDevices): Ditto.
(WebCore::AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices): Don't update
the list of capture devices when the default device changes, only when a device is
added, removed, enabled, or disabled.
* platform/mediastream/mac/CoreAudioCaptureSource.cpp:
(WebCore::CoreAudioSharedUnit::setCaptureDevice): Call `setPreferredAudioSessionDeviceUID`
so the correct device is selected.
(WebCore::CoreAudioSharedUnit::cleanupAudioUnit): Clear m_persistentID.
(WebCore::CoreAudioCaptureSource::create): Return an error with a string so the
web process can detect a failure.
(WebCore::CoreAudioCaptureSource::stopProducingData): Add logging.
Source/WebKit:
* UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp: Re
(WebKit::UserMediaCaptureManagerProxy::SourceProxy::audioUnitWillStart): Delete,
we don't need it now that the web process configures the audio session before
capture begins.
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@280702 268f45cc-cd09-0410-ab3c-d52691b4dbfc
Diff
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/ChangeLog (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/ChangeLog 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/ChangeLog 2021-08-06 16:03:29 UTC (rev 280724)
@@ -1,3 +1,135 @@
+2021-08-06 Russell Epstein <repst...@apple.com>
+
+ Cherry-pick r280702. rdar://problem/81618758
+
+ [iOS] getUserMedia sometimes doesn't capture from specified microphone
+ https://bugs.webkit.org/show_bug.cgi?id=228753
+ rdar://79704226
+
+ Reviewed by Youenn Fablet.
+
+ Source/WebCore:
+
+ The system will always choose the "default" audio input source unless
+ +[AVAudioSession setPreferredInput:error:] is called first, and that only works
+ if the audio session category has been set to PlayAndRecord *before* it is called,
+ so configure the audio session for recording before we choose and configure the
+ audio capture device.
+
+ Tested manually, this only reproduces on hardware.
+
+ * platform/audio/PlatformMediaSessionManager.cpp:
+ (WebCore::PlatformMediaSessionManager::activeAudioSessionRequired const): Audio
+ capture requires an active audio session.
+ (WebCore::PlatformMediaSessionManager::removeSession): Move `#if USE(AUDIO_SESSION)`
+ guard inside of maybeDeactivateAudioSession so it isn't spread throughout the file.
+ (WebCore::PlatformMediaSessionManager::sessionWillBeginPlayback): Ditto.
+ (WebCore::PlatformMediaSessionManager::processWillSuspend): Ditto.
+ (WebCore::PlatformMediaSessionManager::processDidResume): Ditto.
+ (WebCore::PlatformMediaSessionManager::sessionCanProduceAudioChanged): Add logging,
+ call `maybeActivateAudioSession()` so we activate the audio session if necessary.
+ (WebCore::PlatformMediaSessionManager::addAudioCaptureSource): Call updateSessionState
+ instead of scheduleUpdateSessionState so the audio session category is updated
+ immediately.
+ (WebCore::PlatformMediaSessionManager::maybeDeactivateAudioSession): Move
+ `#if USE(AUDIO_SESSION)` into the function so it doesn't need to be spread
+ throughout the file.
+ (WebCore::PlatformMediaSessionManager::maybeActivateAudioSession): Ditto.
+ * platform/audio/PlatformMediaSessionManager.h:
+ (WebCore::PlatformMediaSessionManager::isApplicationInBackground const):
+
+ * platform/audio/ios/AudioSessionIOS.mm:
+ (WebCore::AudioSessionIOS::setPreferredBufferSize): Log an error if we are unable
+ to set the preferred buffer size.
+
+ * platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h:
+ * platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm:
+ (WebCore::AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID):
+ New, set the preferred input so capture will use select the device we want.
+ (WebCore::AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices): Remove
+ m_recomputeDevices, `setAudioCaptureDevices` has been restructured so we don't need it.
+ (WebCore::AVAudioSessionCaptureDeviceManager::computeCaptureDevices): Ditto.
+ (WebCore::AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices): Don't update
+ the list of capture devices when the default device changes, only when a device is
+ added, removed, enabled, or disabled.
+
+ * platform/mediastream/mac/CoreAudioCaptureSource.cpp:
+ (WebCore::CoreAudioSharedUnit::setCaptureDevice): Call `setPreferredAudioSessionDeviceUID`
+ so the correct device is selected.
+ (WebCore::CoreAudioSharedUnit::cleanupAudioUnit): Clear m_persistentID.
+ (WebCore::CoreAudioCaptureSource::create): Return an error with a string, or the
+ web process can detect a failure.
+ (WebCore::CoreAudioCaptureSource::stopProducingData): Add logging.
+
+ Source/WebKit:
+
+ * UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp: Re
+ (WebKit::UserMediaCaptureManagerProxy::SourceProxy::audioUnitWillStart): Delete,
+ we don't need it now that the web process configures the audio session before
+ capture begins.
+
+
+ git-svn-id: https://svn.webkit.org/repository/webkit/trunk@280702 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+ 2021-08-05 Eric Carlson <eric.carl...@apple.com>
+
+ [iOS] getUserMedia sometimes doesn't capture from specified microphone
+ https://bugs.webkit.org/show_bug.cgi?id=228753
+ rdar://79704226
+
+ Reviewed by Youenn Fablet.
+
+ The system will always choose the "default" audio input source unless
+ +[AVAudioSession setPreferredInput:error:] is called first, and that only works
+ if the audio session category has been set to PlayAndRecord *before* it is called,
+ so configure the audio session for recording before we choose and configure the
+ audio capture device.
+
+ Tested manually, this only reproduces on hardware.
+
+ * platform/audio/PlatformMediaSessionManager.cpp:
+ (WebCore::PlatformMediaSessionManager::activeAudioSessionRequired const): Audio
+ capture requires an active audio session.
+ (WebCore::PlatformMediaSessionManager::removeSession): Move `#if USE(AUDIO_SESSION)`
+ guard inside of maybeDeactivateAudioSession so it isn't spread throughout the file.
+ (WebCore::PlatformMediaSessionManager::sessionWillBeginPlayback): Ditto.
+ (WebCore::PlatformMediaSessionManager::processWillSuspend): Ditto.
+ (WebCore::PlatformMediaSessionManager::processDidResume): Ditto.
+ (WebCore::PlatformMediaSessionManager::sessionCanProduceAudioChanged): Add logging,
+ call `maybeActivateAudioSession()` so we activate the audio session if necessary.
+ (WebCore::PlatformMediaSessionManager::addAudioCaptureSource): Call updateSessionState
+ instead of scheduleUpdateSessionState so the audio session category is updated
+ immediately.
+ (WebCore::PlatformMediaSessionManager::maybeDeactivateAudioSession): Move
+ `#if USE(AUDIO_SESSION)` into the function so it doesn't need to be spread
+ throughout the file.
+ (WebCore::PlatformMediaSessionManager::maybeActivateAudioSession): Ditto.
+ * platform/audio/PlatformMediaSessionManager.h:
+ (WebCore::PlatformMediaSessionManager::isApplicationInBackground const):
+
+ * platform/audio/ios/AudioSessionIOS.mm:
+ (WebCore::AudioSessionIOS::setPreferredBufferSize): Log an error if we are unable
+ to set the preferred buffer size.
+
+ * platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h:
+ * platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm:
+ (WebCore::AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID):
+ New, set the preferred input so capture will use select the device we want.
+ (WebCore::AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices): Remove
+ m_recomputeDevices, `setAudioCaptureDevices` has been restructured so we don't need it.
+ (WebCore::AVAudioSessionCaptureDeviceManager::computeCaptureDevices): Ditto.
+ (WebCore::AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices): Don't update
+ the list of capture devices when the default device changes, only when a device is
+ added, removed, enabled, or disabled.
+
+ * platform/mediastream/mac/CoreAudioCaptureSource.cpp:
+ (WebCore::CoreAudioSharedUnit::setCaptureDevice): Call `setPreferredAudioSessionDeviceUID`
+ so the correct device is selected.
+ (WebCore::CoreAudioSharedUnit::cleanupAudioUnit): Clear m_persistentID.
+ (WebCore::CoreAudioCaptureSource::create): Return an error with a string, or the
+ web process can detect a failure.
+ (WebCore::CoreAudioCaptureSource::stopProducingData): Add logging.
+
2021-08-05 Russell Epstein <repst...@apple.com>
Cherry-pick r280648. rdar://problem/81568979
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/PlatformMediaSessionManager.cpp (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/PlatformMediaSessionManager.cpp 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/PlatformMediaSessionManager.cpp 2021-08-06 16:03:29 UTC (rev 280724)
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013-2020 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2021 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -116,8 +116,11 @@
bool PlatformMediaSessionManager::activeAudioSessionRequired() const
{
- return anyOfSessions([] (auto& session) {
- return session.activeAudioSessionRequired();
+ if (anyOfSessions([] (auto& session) { return session.activeAudioSessionRequired(); }))
+ return true;
+
+ return WTF::anyOf(m_audioCaptureSources, [](auto& source) {
+ return source.isCapturingAudio();
});
}
@@ -199,10 +202,8 @@
m_sessions.remove(index);
-#if USE(AUDIO_SESSION)
if (hasNoSession())
maybeDeactivateAudioSession();
-#endif
#if !RELEASE_LOG_DISABLED
m_logger->removeLogger(session.logger());
@@ -237,17 +238,10 @@
return false;
}
-#if USE(AUDIO_SESSION)
- if (activeAudioSessionRequired()) {
- if (!AudioSession::sharedSession().tryToSetActive(true)) {
- ALWAYS_LOG(LOGIDENTIFIER, session.logIdentifier(), " returning false failed to set active AudioSession");
- return false;
- }
-
- ALWAYS_LOG(LOGIDENTIFIER, session.logIdentifier(), " sucessfully activated AudioSession");
- m_becameActive = true;
+ if (!maybeActivateAudioSession()) {
+ ALWAYS_LOG(LOGIDENTIFIER, session.logIdentifier(), " returning false, failed to activate AudioSession");
+ return false;
}
-#endif
if (m_interrupted)
endInterruption(PlatformMediaSession::NoFlags);
@@ -394,9 +388,7 @@
session.client().processIsSuspendedChanged();
});
-#if USE(AUDIO_SESSION)
maybeDeactivateAudioSession();
-#endif
}
void PlatformMediaSessionManager::processDidResume()
@@ -410,10 +402,8 @@
});
#if USE(AUDIO_SESSION)
- if (!m_becameActive && activeAudioSessionRequired()) {
- m_becameActive = AudioSession::sharedSession().tryToSetActive(true);
- ALWAYS_LOG(LOGIDENTIFIER, "tried to set active AudioSession, ", m_becameActive ? "succeeded" : "failed");
- }
+ if (!m_becameActive)
+ maybeActivateAudioSession();
#endif
}
@@ -437,6 +427,8 @@
void PlatformMediaSessionManager::sessionCanProduceAudioChanged()
{
+ ALWAYS_LOG(LOGIDENTIFIER);
+ maybeActivateAudioSession();
updateSessionState();
}
@@ -581,7 +573,7 @@
{
ASSERT(!m_audioCaptureSources.contains(source));
m_audioCaptureSources.add(source);
- scheduleUpdateSessionState();
+ updateSessionState();
}
@@ -603,9 +595,9 @@
});
}
-#if USE(AUDIO_SESSION)
void PlatformMediaSessionManager::maybeDeactivateAudioSession()
{
+#if USE(AUDIO_SESSION)
if (!m_becameActive || !shouldDeactivateAudioSession())
return;
@@ -612,9 +604,22 @@
ALWAYS_LOG(LOGIDENTIFIER, "tried to set inactive AudioSession");
AudioSession::sharedSession().tryToSetActive(false);
m_becameActive = false;
+#endif
}
+
+bool PlatformMediaSessionManager::maybeActivateAudioSession()
+{
+#if USE(AUDIO_SESSION)
+ if (!activeAudioSessionRequired())
+ return true;
+
+ m_becameActive = AudioSession::sharedSession().tryToSetActive(true);
+ ALWAYS_LOG(LOGIDENTIFIER, m_becameActive ? "successfully activated" : "failed to activate", " AudioSession");
+ return m_becameActive;
+#else
+ return true;
#endif
-
+}
static bool& deactivateAudioSession()
{
static bool deactivate;
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/PlatformMediaSessionManager.h (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/PlatformMediaSessionManager.h 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/PlatformMediaSessionManager.h 2021-08-06 16:03:29 UTC (rev 280724)
@@ -181,9 +181,8 @@
bool anyOfSessions(const Function<bool(const PlatformMediaSession&)>&) const;
bool isApplicationInBackground() const { return m_isApplicationInBackground; }
-#if USE(AUDIO_SESSION)
void maybeDeactivateAudioSession();
-#endif
+ bool maybeActivateAudioSession();
#if !RELEASE_LOG_DISABLED
const Logger& logger() const final { return m_logger; }
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/ios/AudioSessionIOS.mm (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/ios/AudioSessionIOS.mm 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/platform/audio/ios/AudioSessionIOS.mm 2021-08-06 16:03:29 UTC (rev 280724)
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2013-2019 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2021 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -301,6 +301,7 @@
NSError *error = nil;
float duration = bufferSize / sampleRate();
[[PAL::getAVAudioSessionClass() sharedInstance] setPreferredIOBufferDuration:duration error:&error];
+ RELEASE_LOG_ERROR_IF(error, Media, "failed to set preferred buffer duration to %f with error: %@", duration, error.localizedDescription);
ASSERT(!error);
}
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h 2021-08-06 16:03:29 UTC (rev 280724)
@@ -57,6 +57,8 @@
void enableAllDevicesQuery();
void disableAllDevicesQuery();
+ void setPreferredAudioSessionDeviceUID(const String&);
+
private:
AVAudioSessionCaptureDeviceManager();
~AVAudioSessionCaptureDeviceManager();
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm 2021-08-06 16:03:29 UTC (rev 280724)
@@ -144,12 +144,29 @@
return std::nullopt;
}
-void AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices()
+void AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID(const String& deviceUID)
{
- if (m_recomputeDevices)
+ AVAudioSessionPortDescription *preferredPort = nil;
+ NSString *nsDeviceUID = deviceUID;
+ for (AVAudioSessionPortDescription *portDescription in [m_audioSession availableInputs]) {
+ if ([portDescription.UID isEqualToString:nsDeviceUID]) {
+ preferredPort = portDescription;
+ break;
+ }
+ }
+
+ if (!preferredPort) {
+ RELEASE_LOG_ERROR(WebRTC, "failed to find preferred input '%{public}s'", deviceUID.ascii().data());
return;
+ }
- m_recomputeDevices = true;
+ NSError *error = nil;
+ if (![[PAL::getAVAudioSessionClass() sharedInstance] setPreferredInput:preferredPort error:&error])
+ RELEASE_LOG_ERROR(WebRTC, "failed to set preferred input to '%{public}s' with error: %@", deviceUID.ascii().data(), error.localizedDescription);
+}
+
+void AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices()
+{
computeCaptureDevices([] { });
}
@@ -180,13 +197,9 @@
});
}
- if (!m_recomputeDevices)
- return;
-
m_dispatchQueue->dispatch([this, completion = WTFMove(completion)] () mutable {
auto newAudioDevices = retrieveAudioSessionCaptureDevices();
callOnWebThreadOrDispatchAsyncOnMainThread(makeBlockPtr([this, completion = WTFMove(completion), newAudioDevices = WTFMove(newAudioDevices).isolatedCopy()] () mutable {
- m_recomputeDevices = false;
setAudioCaptureDevices(WTFMove(newAudioDevices));
completion();
}).get());
@@ -220,17 +233,34 @@
void AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices(Vector<AVAudioSessionCaptureDevice>&& newAudioDevices)
{
bool firstTime = !m_devices;
- bool haveDeviceChanges = !m_devices || newAudioDevices.size() != m_devices->size();
- if (!haveDeviceChanges) {
- for (size_t i = 0; i < newAudioDevices.size(); ++i) {
- auto& oldState = (*m_devices)[i];
- auto& newState = newAudioDevices[i];
- if (newState.type() != oldState.type() || newState.persistentId() != oldState.persistentId() || newState.enabled() != oldState.enabled() || newState.isDefault() != oldState.isDefault())
- haveDeviceChanges = true;
+ bool deviceListChanged = newAudioDevices.size() != m_devices->size();
+ bool defaultDeviceChanged = false;
+ if (!deviceListChanged && !firstTime) {
+ for (auto& newState : newAudioDevices) {
+
+ std::optional<CaptureDevice> oldState;
+ for (const auto& device : m_devices.value()) {
+ if (device.type() == newState.type() && device.persistentId() == newState.persistentId()) {
+ oldState = device;
+ break;
+ }
+ }
+
+ if (!oldState.has_value()) {
+ deviceListChanged = true;
+ break;
+ }
+ if (newState.isDefault() != oldState.value().isDefault())
+ defaultDeviceChanged = true;
+
+ if (newState.enabled() != oldState.value().enabled()) {
+ deviceListChanged = true;
+ break;
+ }
}
}
- if (!haveDeviceChanges && !firstTime)
+ if (!deviceListChanged && !firstTime && !defaultDeviceChanged)
return;
auto newDevices = copyToVectorOf<CaptureDevice>(newAudioDevices);
@@ -240,7 +270,7 @@
});
m_devices = WTFMove(newDevices);
- if (!firstTime)
+ if (deviceListChanged && !firstTime)
deviceChanged();
}
Modified: branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.cpp (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.cpp 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebCore/platform/mediastream/mac/CoreAudioCaptureSource.cpp 2021-08-06 16:03:29 UTC (rev 280724)
@@ -163,6 +163,9 @@
void CoreAudioSharedUnit::setCaptureDevice(String&& persistentID, uint32_t captureDeviceID)
{
+ if (m_persistentID == persistentID)
+ return;
+
m_persistentID = WTFMove(persistentID);
#if PLATFORM(MAC)
@@ -173,6 +176,7 @@
reconfigureAudioUnit();
#else
UNUSED_PARAM(captureDeviceID);
+ AVAudioSessionCaptureDeviceManager::singleton().setPreferredAudioSessionDeviceUID(m_persistentID);
#endif
}
@@ -466,6 +470,7 @@
m_microphoneSampleBuffer = nullptr;
m_speakerSampleBuffer = nullptr;
+ m_persistentID = emptyString();
#if !LOG_DISABLED
m_ioUnitName = emptyString();
#endif
@@ -619,7 +624,7 @@
#elif PLATFORM(IOS_FAMILY)
auto device = AVAudioSessionCaptureDeviceManager::singleton().audioSessionDeviceWithUID(WTFMove(deviceID));
if (!device)
- return { };
+ return { "No AVAudioSessionCaptureDevice device"_s };
auto source = adoptRef(*new CoreAudioCaptureSource(WTFMove(deviceID), String { device->label() }, WTFMove(hashSalt), 0));
#endif
@@ -760,6 +765,7 @@
void CoreAudioCaptureSource::stopProducingData()
{
+ ALWAYS_LOG_IF(loggerPtr(), LOGIDENTIFIER);
unit().stopProducingData();
}
Modified: branches/safari-612.1.27.0-branch/Source/WebKit/ChangeLog (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebKit/ChangeLog 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebKit/ChangeLog 2021-08-06 16:03:29 UTC (rev 280724)
@@ -1,3 +1,89 @@
+2021-08-06 Russell Epstein <repst...@apple.com>
+
+ Cherry-pick r280702. rdar://problem/81618758
+
+ [iOS] getUserMedia sometimes doesn't capture from specified microphone
+ https://bugs.webkit.org/show_bug.cgi?id=228753
+ rdar://79704226
+
+ Reviewed by Youenn Fablet.
+
+ Source/WebCore:
+
+ The system will always choose the "default" audio input source unless
+ +[AVAudioSession setPreferredInput:error:] is called first, and that only works
+ if the audio session category has been set to PlayAndRecord *before* it is called,
+ so configure the audio session for recording before we choose and configure the
+ audio capture device.
+
+ Tested manually, this only reproduces on hardware.
+
+ * platform/audio/PlatformMediaSessionManager.cpp:
+ (WebCore::PlatformMediaSessionManager::activeAudioSessionRequired const): Audio
+ capture requires an active audio session.
+ (WebCore::PlatformMediaSessionManager::removeSession): Move `#if USE(AUDIO_SESSION)`
+ guard inside of maybeDeactivateAudioSession so it isn't spread throughout the file.
+ (WebCore::PlatformMediaSessionManager::sessionWillBeginPlayback): Ditto.
+ (WebCore::PlatformMediaSessionManager::processWillSuspend): Ditto.
+ (WebCore::PlatformMediaSessionManager::processDidResume): Ditto.
+ (WebCore::PlatformMediaSessionManager::sessionCanProduceAudioChanged): Add logging,
+ call `maybeActivateAudioSession()` so we activate the audio session if necessary.
+ (WebCore::PlatformMediaSessionManager::addAudioCaptureSource): Call updateSessionState
+ instead of scheduleUpdateSessionState so the audio session category is updated
+ immediately.
+ (WebCore::PlatformMediaSessionManager::maybeDeactivateAudioSession): Move
+ `#if USE(AUDIO_SESSION)` into the function so it doesn't need to be spread
+ throughout the file.
+ (WebCore::PlatformMediaSessionManager::maybeActivateAudioSession): Ditto.
+ * platform/audio/PlatformMediaSessionManager.h:
+ (WebCore::PlatformMediaSessionManager::isApplicationInBackground const):
+
+ * platform/audio/ios/AudioSessionIOS.mm:
+ (WebCore::AudioSessionIOS::setPreferredBufferSize): Log an error if we are unable
+ to set the preferred buffer size.
+
+ * platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h:
+ * platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm:
+ (WebCore::AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID):
+ New, set the preferred input so capture will use select the device we want.
+ (WebCore::AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices): Remove
+ m_recomputeDevices, `setAudioCaptureDevices` has been restructured so we don't need it.
+ (WebCore::AVAudioSessionCaptureDeviceManager::computeCaptureDevices): Ditto.
+ (WebCore::AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices): Don't update
+ the list of capture devices when the default device changes, only when a device is
+ added, removed, enabled, or disabled.
+
+ * platform/mediastream/mac/CoreAudioCaptureSource.cpp:
+ (WebCore::CoreAudioSharedUnit::setCaptureDevice): Call `setPreferredAudioSessionDeviceUID`
+ so the correct device is selected.
+ (WebCore::CoreAudioSharedUnit::cleanupAudioUnit): Clear m_persistentID.
+ (WebCore::CoreAudioCaptureSource::create): Return an error with a string, or the
+ web process can detect a failure.
+ (WebCore::CoreAudioCaptureSource::stopProducingData): Add logging.
+
+ Source/WebKit:
+
+ * UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp: Re
+ (WebKit::UserMediaCaptureManagerProxy::SourceProxy::audioUnitWillStart): Delete,
+ we don't need it now that the web process configures the audio session before
+ capture begins.
+
+
+ git-svn-id: https://svn.webkit.org/repository/webkit/trunk@280702 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+
+ 2021-08-05 Eric Carlson <eric.carl...@apple.com>
+
+ [iOS] getUserMedia sometimes doesn't capture from specified microphone
+ https://bugs.webkit.org/show_bug.cgi?id=228753
+ rdar://79704226
+
+ Reviewed by Youenn Fablet.
+
+ * UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp: Re
+ (WebKit::UserMediaCaptureManagerProxy::SourceProxy::audioUnitWillStart): Delete,
+ we don't need it now that the web process configures the audio session before
+ capture begins.
+
2021-08-05 Russell Epstein <repst...@apple.com>
Cherry-pick r280652. rdar://problem/81568994
Modified: branches/safari-612.1.27.0-branch/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp (280723 => 280724)
--- branches/safari-612.1.27.0-branch/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp 2021-08-06 15:09:20 UTC (rev 280723)
+++ branches/safari-612.1.27.0-branch/Source/WebKit/UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp 2021-08-06 16:03:29 UTC (rev 280724)
@@ -100,15 +100,6 @@
CAAudioStreamDescription& description() { return m_description; }
int64_t numberOfFrames() { return m_numberOfFrames; }
- void audioUnitWillStart() final
- {
- // FIXME: WebProcess might want to set the category/bufferSize itself, in which case we should remove that code.
- auto bufferSize = AudioSession::sharedSession().sampleRate() / 50;
- if (AudioSession::sharedSession().preferredBufferSize() > bufferSize)
- AudioSession::sharedSession().setPreferredBufferSize(bufferSize);
- AudioSession::sharedSession().setCategory(AudioSession::CategoryType::PlayAndRecord, RouteSharingPolicy::Default);
- }
-
void start()
{
m_shouldReset = true;