
[RFC,v2,0/5] Apple Macs machine/platform ASoC driver

Message ID: 20220606191910.16580-1-povik+lin@cutebit.org
Series: Apple Macs machine/platform ASoC driver

Message

Martin Povišer June 6, 2022, 7:19 p.m. UTC
Hi all,

This is again an RFC with a machine-level ASoC driver for recent Apple Macs
with the M1 line of chips. This time I have attached the platform driver too
for good measure. What I am most interested in is a check of the overall
approach, especially on two points (both in some ways already discussed
in the previous RFC [0]):

 - The way the platform/machine driver handles the fact that multiple I2S
   ports (now backend DAIs) can be driven by/connected to the same SERDES
   unit (now in effect a frontend DAI). After the previous discussion I
   transitioned to DPCM to model this. I took the opportunity of dynamic
   backend/frontend routing to support runtime switching between speakers
   and headphones. More on this in the comments at the top of the machine
   and platform drivers.

 - The way the machine driver deactivates some of the controls where
   suitable, and limits volume on others. I added a new ASoC card method
   to that end.

Kind regards,
Martin

[0] https://lore.kernel.org/linux-devicetree/20220331000449.41062-1-povik+lin@cutebit.org/

Martin Povišer (5):
  dt-bindings: sound: Add Apple MCA I2S transceiver
  dt-bindings: sound: Add Apple Macs sound peripherals
  ASoC: apple: Add MCA platform driver for Apple SoCs
  ASoC: Introduce 'fixup_controls' card method
  ASoC: apple: Add macaudio machine driver

 .../bindings/sound/apple,macaudio.yaml        |  157 +++
 .../devicetree/bindings/sound/apple,mca.yaml  |  102 ++
 include/sound/soc-card.h                      |    1 +
 include/sound/soc.h                           |    1 +
 sound/soc/Kconfig                             |    1 +
 sound/soc/Makefile                            |    1 +
 sound/soc/apple/Kconfig                       |   25 +
 sound/soc/apple/Makefile                      |    5 +
 sound/soc/apple/macaudio.c                    | 1004 +++++++++++++++
 sound/soc/apple/mca.c                         | 1122 +++++++++++++++++
 sound/soc/soc-card.c                          |    6 +
 sound/soc/soc-core.c                          |    1 +
 12 files changed, 2426 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/sound/apple,macaudio.yaml
 create mode 100644 Documentation/devicetree/bindings/sound/apple,mca.yaml
 create mode 100644 sound/soc/apple/Kconfig
 create mode 100644 sound/soc/apple/Makefile
 create mode 100644 sound/soc/apple/macaudio.c
 create mode 100644 sound/soc/apple/mca.c

Comments

Pierre-Louis Bossart June 6, 2022, 8:02 p.m. UTC | #1
> + * Virtual FE/BE Playback Topology
> + * -------------------------------
> + *
> + * The platform driver has independent frontend and backend DAIs with the
> + * option of routing backends to any of the frontends. The platform
> + * driver configures the routing based on DPCM couplings in ASoC runtime
> + * structures, which in turn is determined from DAPM paths by ASoC. But the
> + * platform driver doesn't supply relevant DAPM paths and leaves that up for
> + * the machine driver to fill in. The filled-in virtual topology can be
> + * anything as long as a particular backend isn't connected to more than one
> + * frontend at any given time. (The limitation is due to the unsupported case
> + * of reparenting of live BEs.)
> + *
> + * The DAPM routing that this machine-level driver makes up has two use-cases
> + * in mind:
> + *
> + * - Using a single PCM for playback such that it conditionally sinks to either
> + *   speakers or headphones based on the plug-in state of the headphones jack.
> + *   All the while making the switch transparent to userspace. This has the
> + *   drawback of requiring a sample stream suited for both speakers and
> + *   headphones, which is hard to come by on machines where tailored DSP for
> + *   speakers in userspace is desirable or required.
> + *
> + * - Driving the headphones and speakers from distinct PCMs, having userspace
> + *   bridge the difference and apply different signal processing to the two.
> + *
> + * In the end the topology supplied by this driver looks like this:
> + *
> + *  PCMs (frontends)                   I2S Port Groups (backends)
> + *  ────────────────                   ──────────────────────────
> + *
> + *  ┌──────────┐       ┌───────────────► ┌─────┐     ┌──────────┐
> + *  │ Primary  ├───────┤                 │ Mux │ ──► │ Speakers │
> + *  └──────────┘       │    ┌──────────► └─────┘     └──────────┘
> + *                ┌─── │ ───┘             ▲
> + *  ┌──────────┐  │    │                  │
> + *  │Secondary ├──┘    │     ┌────────────┴┐
> + *  └──────────┘       ├────►│Plug-in Demux│
> + *                     │     └────────────┬┘
> + *                     │                  │
> + *                     │                  ▼
> + *                     │                 ┌─────┐     ┌──────────┐
> + *                     └───────────────► │ Mux │ ──► │Headphones│
> + *                                       └─────┘     └──────────┘
> + */

In Patch2, the 'clusters' are described as front-ends, with I2S as
back-ends. Here the PCMs are described as front-ends, but there's no
mention of clusters. Either one of the two descriptions is outdated, or
there's something missing to help reconcile the two pieces of information?


> +static int macaudio_copy_link(struct device *dev, struct snd_soc_dai_link *target,
> +			       struct snd_soc_dai_link *source)
> +{
> +	memcpy(target, source, sizeof(struct snd_soc_dai_link));
> +
> +	target->cpus = devm_kcalloc(dev, target->num_cpus,
> +				sizeof(*target->cpus), GFP_KERNEL);
> +	target->codecs = devm_kcalloc(dev, target->num_codecs,
> +				sizeof(*target->codecs), GFP_KERNEL);
> +	target->platforms = devm_kcalloc(dev, target->num_platforms,
> +				sizeof(*target->platforms), GFP_KERNEL);
> +
> +	if (!target->cpus || !target->codecs || !target->platforms)
> +		return -ENOMEM;
> +
> +	memcpy(target->cpus, source->cpus, sizeof(*target->cpus) * target->num_cpus);
> +	memcpy(target->codecs, source->codecs, sizeof(*target->codecs) * target->num_codecs);
> +	memcpy(target->platforms, source->platforms, sizeof(*target->platforms) * target->num_platforms);


use devm_kmemdup?

> +
> +	return 0;
> +}
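For reference, the three allocate-and-copy pairs could collapse into
devm_kmemdup() calls along these lines (a sketch reusing the names from the
quoted code):

	target->cpus = devm_kmemdup(dev, source->cpus,
				    sizeof(*source->cpus) * source->num_cpus,
				    GFP_KERNEL);
	target->codecs = devm_kmemdup(dev, source->codecs,
				      sizeof(*source->codecs) * source->num_codecs,
				      GFP_KERNEL);
	target->platforms = devm_kmemdup(dev, source->platforms,
					 sizeof(*source->platforms) * source->num_platforms,
					 GFP_KERNEL);

	if (!target->cpus || !target->codecs || !target->platforms)
		return -ENOMEM;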

> +static int macaudio_get_runtime_mclk_fs(struct snd_pcm_substream *substream)
> +{
> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(rtd->card);
> +	struct snd_soc_dpcm *dpcm;
> +
> +	/*
> +	 * If this is a FE, look it up in link_props directly.
> +	 * If this is a BE, look it up in the respective FE.
> +	 */
> +	if (!rtd->dai_link->no_pcm)
> +		return ma->link_props[rtd->dai_link->id].mclk_fs;
> +
> +	for_each_dpcm_fe(rtd, substream->stream, dpcm) {
> +		int fe_id = dpcm->fe->dai_link->id;
> +
> +		return ma->link_props[fe_id].mclk_fs;
> +	}

I am not sure what the concept of mclk would mean for a front-end? This
is typically very I2S-specific, i.e. a back-end property, no?

> +
> +	return 0;
> +}
> +
> +static int macaudio_dpcm_hw_params(struct snd_pcm_substream *substream,
> +				   struct snd_pcm_hw_params *params)
> +{
> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
> +	struct snd_soc_dai *cpu_dai = asoc_rtd_to_cpu(rtd, 0);
> +	int mclk_fs = macaudio_get_runtime_mclk_fs(substream);
> +	int i;
> +
> +	if (mclk_fs) {
> +		struct snd_soc_dai *dai;
> +		int mclk = params_rate(params) * mclk_fs;
> +
> +		for_each_rtd_codec_dais(rtd, i, dai)
> +			snd_soc_dai_set_sysclk(dai, 0, mclk, SND_SOC_CLOCK_IN);
> +
> +		snd_soc_dai_set_sysclk(cpu_dai, 0, mclk, SND_SOC_CLOCK_OUT);
> +	}
> +
> +	return 0;
> +}
> +
> +static void macaudio_dpcm_shutdown(struct snd_pcm_substream *substream)
> +{
> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
> +	struct snd_soc_dai *cpu_dai = asoc_rtd_to_cpu(rtd, 0);
> +	struct snd_soc_dai *dai;
> +	int mclk_fs = macaudio_get_runtime_mclk_fs(substream);
> +	int i;
> +
> +	if (mclk_fs) {
> +		for_each_rtd_codec_dais(rtd, i, dai)
> +			snd_soc_dai_set_sysclk(dai, 0, 0, SND_SOC_CLOCK_IN);
> +
> +		snd_soc_dai_set_sysclk(cpu_dai, 0, 0, SND_SOC_CLOCK_OUT);
> +	}
> +}
> +
> +static const struct snd_soc_ops macaudio_fe_ops = {
> +	.shutdown	= macaudio_dpcm_shutdown,
> +	.hw_params	= macaudio_dpcm_hw_params,
> +};
> +
> +static const struct snd_soc_ops macaudio_be_ops = {
> +	.shutdown	= macaudio_dpcm_shutdown,
> +	.hw_params	= macaudio_dpcm_hw_params,
> +};
> +
> +static int macaudio_be_assign_tdm(struct snd_soc_pcm_runtime *rtd)
> +{
> +	struct snd_soc_card *card = rtd->card;
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
> +	struct snd_soc_dai *dai;
> +	unsigned int mask;
> +	int nslots, ret, i;
> +
> +	if (!props->tdm_mask)
> +		return 0;
> +
> +	mask = props->tdm_mask;
> +	nslots = __fls(mask) + 1;
> +
> +	if (rtd->num_codecs == 1) {
> +		ret = snd_soc_dai_set_tdm_slot(asoc_rtd_to_codec(rtd, 0), mask,
> +					       0, nslots, MACAUDIO_SLOTWIDTH);
> +
> +		/*
> +		 * Headphones get a pass on -EOPNOTSUPP (see the comment
> +		 * around mclk_fs value for primary FE).
> +		 */
> +		if (ret == -EOPNOTSUPP && props->is_headphones)
> +			return 0;
> +
> +		return ret;
> +	}
> +
> +	for_each_rtd_codec_dais(rtd, i, dai) {
> +		int slot = __ffs(mask);
> +
> +		mask &= ~(1 << slot);
> +		ret = snd_soc_dai_set_tdm_slot(dai, 1 << slot, 0, nslots,
> +					       MACAUDIO_SLOTWIDTH);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int macaudio_be_init(struct snd_soc_pcm_runtime *rtd)
> +{
> +	struct snd_soc_card *card = rtd->card;
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
> +	struct snd_soc_dai *dai;
> +	int i, ret;
> +
> +	ret = macaudio_be_assign_tdm(rtd);
> +	if (ret < 0)
> +		return ret;
> +
> +	if (props->is_headphones) {
> +		for_each_rtd_codec_dais(rtd, i, dai)
> +			snd_soc_component_set_jack(dai->component, &ma->jack, NULL);
> +	}

this is weird, set_jack() is invoked by the ASoC core. You shouldn't
need to do this?

> +
> +	return 0;
> +}
> +
> +static void macaudio_be_exit(struct snd_soc_pcm_runtime *rtd)
> +{
> +	struct snd_soc_card *card = rtd->card;
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
> +	struct snd_soc_dai *dai;
> +	int i;
> +
> +	if (props->is_headphones) {
> +		for_each_rtd_codec_dais(rtd, i, dai)
> +			snd_soc_component_set_jack(dai->component, NULL, NULL);
> +	}

same, why is this needed?

> +}
> +
> +static int macaudio_fe_init(struct snd_soc_pcm_runtime *rtd)
> +{
> +	struct snd_soc_card *card = rtd->card;
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
> +	int nslots = props->mclk_fs / MACAUDIO_SLOTWIDTH;
> +
> +	return snd_soc_dai_set_tdm_slot(asoc_rtd_to_cpu(rtd, 0), (1 << nslots) - 1,
> +					(1 << nslots) - 1, nslots, MACAUDIO_SLOTWIDTH);
> +}
> +
> +
> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
> +				void *data);
> +
> +static struct notifier_block macaudio_jack_nb = {
> +	.notifier_call = macaudio_jack_event,
> +};

why is this needed? we have already many ways of dealing with the jack
events (dare I say too many ways?).
> +
> +static int macaudio_probe(struct snd_soc_card *card)
> +{
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
> +	int ret;
> +
> +	ma->pin.pin = "Headphones";
> +	ma->pin.mask = SND_JACK_HEADSET | SND_JACK_HEADPHONE;
> +	ret = snd_soc_card_jack_new(card, ma->pin.pin,
> +			SND_JACK_HEADSET |
> +			SND_JACK_HEADPHONE |
> +			SND_JACK_BTN_0 | SND_JACK_BTN_1 |
> +			SND_JACK_BTN_2 | SND_JACK_BTN_3,
> +			&ma->jack, &ma->pin, 1);
> +
> +	if (ret < 0) {
> +		dev_err(card->dev, "jack creation failed: %d\n", ret);
> +		return ret;
> +	}
> +
> +	snd_soc_jack_notifier_register(&ma->jack, &macaudio_jack_nb);
> +
> +	return ret;
> +}
> +
> +static int macaudio_add_backend_dai_route(struct snd_soc_card *card, struct snd_soc_dai *dai,
> +					  bool is_speakers)
> +{
> +	struct snd_soc_dapm_route routes[2];
> +	int nroutes;
> +	int ret;

newline?

> +	memset(routes, 0, sizeof(routes));
> +
> +	dev_dbg(card->dev, "adding routes for '%s'\n", dai->name);
> +
> +	if (is_speakers)
> +		routes[0].source = "Speakers Playback";
> +	else
> +		routes[0].source = "Headphones Playback";
> +	routes[0].sink = dai->playback_widget->name;
> +	nroutes = 1;
> +
> +	if (!is_speakers) {
> +		routes[1].source = dai->capture_widget->name;
> +		routes[1].sink = "Headphones Capture";
> +		nroutes = 2;
> +	}
> +
> +	ret = snd_soc_dapm_add_routes(&card->dapm, routes, nroutes);
> +	if (ret)
> +		dev_err(card->dev, "failed adding dynamic DAPM routes for %s\n",
> +			dai->name);
> +	return ret;
> +}
> +
> +static bool macaudio_match_kctl_name(const char *pattern, const char *name)
> +{
> +	if (pattern[0] == '*') {
> +		int namelen, patternlen;
> +
> +		pattern++;
> +		if (pattern[0] == ' ')
> +			pattern++;
> +
> +		namelen = strlen(name);
> +		patternlen = strlen(pattern);
> +
> +		if (namelen > patternlen)
> +			name += (namelen - patternlen);
> +	}
> +
> +	return !strcmp(name, pattern);
> +}
> +
> +static int macaudio_limit_volume(struct snd_soc_card *card,
> +				 const char *pattern, int max)
> +{
> +	struct snd_kcontrol *kctl;
> +	struct soc_mixer_control *mc;
> +	int found = 0;
> +
> +	list_for_each_entry(kctl, &card->snd_card->controls, list) {
> +		if (!macaudio_match_kctl_name(pattern, kctl->id.name))
> +			continue;
> +
> +		found++;
> +		dev_dbg(card->dev, "limiting volume on '%s'\n", kctl->id.name);
> +
> +		/*
> +		 * TODO: This doesn't decrease the volume if it's already
> +		 * above the limit!
> +		 */
> +		mc = (struct soc_mixer_control *)kctl->private_value;
> +		if (max <= mc->max)
> +			mc->platform_max = max;
> +
> +	}
> +
> +	return found;
> +}
> +
> +static int macaudio_late_probe(struct snd_soc_card *card)
> +{
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
> +	struct snd_soc_pcm_runtime *rtd;
> +	struct snd_soc_dai *dai;
> +	int ret, i;
> +
> +	/* Add the dynamic DAPM routes */
> +	for_each_card_rtds(card, rtd) {
> +		struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
> +
> +		if (!rtd->dai_link->no_pcm)
> +			continue;
> +
> +		for_each_rtd_cpu_dais(rtd, i, dai) {
> +			ret = macaudio_add_backend_dai_route(card, dai, props->is_speakers);
> +
> +			if (ret)
> +				return ret;
> +		}
> +	}
> +
> +	if (!ma->mdata) {
> +		dev_err(card->dev, "driver doesn't know speaker limits for this model\n");
> +		return void_warranty ? 0 : -EINVAL;
> +	}
> +
> +	macaudio_limit_volume(card, "* Amp Gain", ma->mdata->spk_amp_gain_max);
> +	return 0;
> +}
> +
> +static const char * const macaudio_plugin_demux_texts[] = {
> +	"Speakers",
> +	"Headphones"
> +};
> +
> +SOC_ENUM_SINGLE_VIRT_DECL(macaudio_plugin_demux_enum, macaudio_plugin_demux_texts);
> +
> +static int macaudio_plugin_demux_get(struct snd_kcontrol *kcontrol,
> +			struct snd_ctl_elem_value *ucontrol)
> +{
> +	struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol);
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(dapm->card);
> +
> +	/*
> +	 * TODO: Determine what locking is in order here...
> +	 */
> +	ucontrol->value.enumerated.item[0] = ma->jack_plugin_state;
> +
> +	return 0;
> +}
> +
> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
> +				void *data)
> +{
> +	struct snd_soc_jack *jack = data;
> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
> +
> +	ma->jack_plugin_state = !!event;
> +
> +	if (!ma->plugin_demux_kcontrol)
> +		return 0;
> +
> +	snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
> +				      ma->jack_plugin_state,
> +				      (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);

the term 'plugin' can be understood in many ways by different audio
folks. 'plugin' is usually the term used for processing libraries (VST,
LADSPA, etc). I think here you meant 'jack control'?

> +
> +	return 0;
> +}
> +
Mark Brown June 6, 2022, 8:17 p.m. UTC | #2
On Mon, Jun 06, 2022 at 09:19:08PM +0200, Martin Povišer wrote:

> +++ b/sound/soc/apple/mca.c
> @@ -0,0 +1,1122 @@
> +/*
> + * Apple SoCs MCA driver

Please add SPDX headers to all your files.
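For a .c file that would be the very first line, e.g. (the license
expression here is only an example; the actual choice is the author's):

	// SPDX-License-Identifier: GPL-2.0-only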

> +		mca_modify(cl, serdes_conf,
> +			SERDES_CONF_SOME_RST, SERDES_CONF_SOME_RST);
> +		(void) readl_relaxed(cl->base + serdes_conf);

Please drop the cast, casts to/from void are generally a warning sign as
they're unneeded in C.  If you want to document the barrier use a
comment or wrapper function.
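A wrapper documenting the intent could be as small as this (a sketch; the
cluster struct name is assumed from the surrounding code):

	/* Read back the register to flush the preceding posted write. */
	static void mca_flush_write(struct mca_cluster *cl, int reg)
	{
		readl_relaxed(cl->base + reg);
	}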

> +	/*
> +	 * Codecs require clocks at the time of unmute with the 'mute_stream' op.
> +	 * We need to enable them here at the latest (frontend prepare would
> +	 * be too late).
> +	 */
> +	if (!mca_fe_clocks_in_use(fe_cl)) {
> +		ret = mca_fe_enable_clocks(fe_cl);
> +		if (ret < 0)
> +			return ret;
> +	}

This requirement is CODEC specific.  It's fine to bodge around to
satisfy it though, especially given the restricted set of platforms this
can be used with.

> +	fe_cl = &mca->clusters[cl->port_driver];
> +	if (!mca_fe_clocks_in_use(fe_cl))
> +		return 0; /* Nothing to do */
> +
> +	cl->clocks_in_use[substream->stream] = false;
> +
> +	if (!mca_fe_clocks_in_use(fe_cl))
> +		mca_fe_disable_clocks(fe_cl);

Are you sure this doesn't need locking?
Martin Povišer June 6, 2022, 8:35 p.m. UTC | #3
> On 6. 6. 2022, at 22:17, Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Jun 06, 2022 at 09:19:08PM +0200, Martin Povišer wrote:
> 
> [...]
> 
>> +	fe_cl = &mca->clusters[cl->port_driver];
>> +	if (!mca_fe_clocks_in_use(fe_cl))
>> +		return 0; /* Nothing to do */
>> +
>> +	cl->clocks_in_use[substream->stream] = false;
>> +
>> +	if (!mca_fe_clocks_in_use(fe_cl))
>> +		mca_fe_disable_clocks(fe_cl);
> 
> Are you sure this doesn't need locking?

I am not sure. I need to study what locking is already done by ALSA/ASoC.
I assume the two stream directions here don’t share a lock already...
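The kind of serialization in question would be something like this (just a
sketch, assuming a mutex added to the driver data shared by both
directions; names hypothetical):

	/* in the shared driver data */
	struct mutex clk_lock;

	mutex_lock(&mca->clk_lock);
	cl->clocks_in_use[substream->stream] = false;
	if (!mca_fe_clocks_in_use(fe_cl))
		mca_fe_disable_clocks(fe_cl);
	mutex_unlock(&mca->clk_lock);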
Martin Povišer June 6, 2022, 8:46 p.m. UTC | #4
(I am having trouble delivering mail to linux.intel.com, so I reply to the list
and CC at least...)

> On 6. 6. 2022, at 22:02, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
> 
> 
>> [...]
> 
> In Patch2, the 'clusters' are described as front-ends, with I2S as
> back-ends. Here the PCMs are described as front-ends, but there's no
> mention of clusters. Either one of the two descriptions is outdated, or
> there's something missing to help reconcile the two pieces of information?

Both descriptions should be in sync. Maybe I don’t know the proper
terminology. In both cases the frontend is meant to be the actual I2S
transceiver unit, and the backend the I2S port on the SoC’s periphery,
which can be routed to any of the transceiver units. (Multiple ports can
be routed to the same unit, which means the ports will have the same
clocks and data line -- that's a configuration we need to support to
drive some of the speaker arrays, hence the backend/frontend
distinction.)

Maybe I am using 'PCM' in a confusing way here? What I meant is a
subdevice that’s visible from userspace, because I have seen it used
that way in ALSA codebase.

>> +static int macaudio_copy_link(struct device *dev, struct snd_soc_dai_link *target,
>> +			       struct snd_soc_dai_link *source)
>> +{
>> +	memcpy(target, source, sizeof(struct snd_soc_dai_link));
>> +
>> +	target->cpus = devm_kcalloc(dev, target->num_cpus,
>> +				sizeof(*target->cpus), GFP_KERNEL);
>> +	target->codecs = devm_kcalloc(dev, target->num_codecs,
>> +				sizeof(*target->codecs), GFP_KERNEL);
>> +	target->platforms = devm_kcalloc(dev, target->num_platforms,
>> +				sizeof(*target->platforms), GFP_KERNEL);
>> +
>> +	if (!target->cpus || !target->codecs || !target->platforms)
>> +		return -ENOMEM;
>> +
>> +	memcpy(target->cpus, source->cpus, sizeof(*target->cpus) * target->num_cpus);
>> +	memcpy(target->codecs, source->codecs, sizeof(*target->codecs) * target->num_codecs);
>> +	memcpy(target->platforms, source->platforms, sizeof(*target->platforms) * target->num_platforms);
> 
> 
> use devm_kmemdup?

Looks like what I am looking for.

>> +
>> +	return 0;
>> +}
> 
>> +static int macaudio_get_runtime_mclk_fs(struct snd_pcm_substream *substream)
>> +{
>> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(rtd->card);
>> +	struct snd_soc_dpcm *dpcm;
>> +
>> +	/*
>> +	 * If this is a FE, look it up in link_props directly.
>> +	 * If this is a BE, look it up in the respective FE.
>> +	 */
>> +	if (!rtd->dai_link->no_pcm)
>> +		return ma->link_props[rtd->dai_link->id].mclk_fs;
>> +
>> +	for_each_dpcm_fe(rtd, substream->stream, dpcm) {
>> +		int fe_id = dpcm->fe->dai_link->id;
>> +
>> +		return ma->link_props[fe_id].mclk_fs;
>> +	}
> 
> I am not sure what the concept of mclk would mean for a front-end? This
> is typically very I2S-specific, i.e. a back-end property, no?

Right, that’s a result of the confusion from above. Hope I cleared it up
somehow. The frontend already decides the clocks and data serialization,
hence mclk/fs is a frontend-prop here.

>> [...]
>> +static int macaudio_be_init(struct snd_soc_pcm_runtime *rtd)
>> +{
>> +	struct snd_soc_card *card = rtd->card;
>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
>> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
>> +	struct snd_soc_dai *dai;
>> +	int i, ret;
>> +
>> +	ret = macaudio_be_assign_tdm(rtd);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	if (props->is_headphones) {
>> +		for_each_rtd_codec_dais(rtd, i, dai)
>> +			snd_soc_component_set_jack(dai->component, &ma->jack, NULL);
>> +	}
> 
> this is weird, set_jack() is invoked by the ASoC core. You shouldn't
> need to do this?

That’s interesting. Where would it be invoked? How does ASoC know which codec
it attaches to?

[...]
>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>> +				void *data);
>> +
>> +static struct notifier_block macaudio_jack_nb = {
>> +	.notifier_call = macaudio_jack_event,
>> +};
> 
> why is this needed? we have already many ways of dealing with the jack
> events (dare I say too many ways?).

Because I want to update the DAPM paths based on the jack status,
specifically I want to set macaudio_plugin_demux. I don’t know how
else it could be done.

[...]
>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>> +				void *data)
>> +{
>> +	struct snd_soc_jack *jack = data;
>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
>> +
>> +	ma->jack_plugin_state = !!event;
>> +
>> +	if (!ma->plugin_demux_kcontrol)
>> +		return 0;
>> +
>> +	snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
>> +				      ma->jack_plugin_state,
>> +				      (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
> 
> the term 'plugin' can be understood in many ways by different audio
> folks. 'plugin' is usually the term used for processing libraries (VST,
> LADSPA, etc). I think here you meant 'jack control'?

So ‘jack control’ would be understood as the jack plugged/unplugged status?


Martin
Pierre-Louis Bossart June 6, 2022, 9:22 p.m. UTC | #5
On 6/6/22 15:46, Martin Povišer wrote:
> (I am having trouble delivering mail to linux.intel.com, so I reply to the list
> and CC at least...)
> 
>> On 6. 6. 2022, at 22:02, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
>>
>>
>>> [...]
>>
>> In Patch2, the 'clusters' are described as front-ends, with I2S as
>> back-ends. Here the PCMs are described as front-ends, but there's no
>> mention of clusters. Either one of the two descriptions is outdated, or
>> there's something missing to help reconcile the two pieces of information?
> 
>> Both descriptions should be in sync. Maybe I don’t know the proper
>> terminology. In both cases the frontend is meant to be the actual I2S
>> transceiver unit, and the backend the I2S port on the SoC’s periphery,
>> which can be routed to any of the transceiver units. (Multiple ports can
>> be routed to the same unit, which means the ports will have the same
>> clocks and data line -- that's a configuration we need to support to
>> drive some of the speaker arrays, hence the backend/frontend
>> distinction.)
> 
> Maybe I am using 'PCM' in a confusing way here? What I meant is a
> subdevice that’s visible from userspace, because I have seen it used
> that way in ALSA codebase.

Yes, I think most people familiar with DPCM would take the 'PCM
frontend' as some sort of generic DMA transfer from system memory, while
the 'back end' is more the actual serial link. In your case, the
front-end is already very low-level and tied to I2S. I think that's
fine; it's just that using different terms for 'cluster' and 'PCM' in
different patches could lead to confusion.

>>> +static int macaudio_get_runtime_mclk_fs(struct snd_pcm_substream *substream)
>>> +{
>>> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(rtd->card);
>>> +	struct snd_soc_dpcm *dpcm;
>>> +
>>> +	/*
>>> +	 * If this is a FE, look it up in link_props directly.
>>> +	 * If this is a BE, look it up in the respective FE.
>>> +	 */
>>> +	if (!rtd->dai_link->no_pcm)
>>> +		return ma->link_props[rtd->dai_link->id].mclk_fs;
>>> +
>>> +	for_each_dpcm_fe(rtd, substream->stream, dpcm) {
>>> +		int fe_id = dpcm->fe->dai_link->id;
>>> +
>>> +		return ma->link_props[fe_id].mclk_fs;
>>> +	}
>>
>> I am not sure what the concept of mclk would mean for a front-end? This
>> is typically very I2S-specific, i.e. a back-end property, no?
> 
> Right, that’s a result of the confusion from above. Hope I cleared it up
> somehow. The frontend already decides the clocks and data serialization,
> hence mclk/fs is a frontend-prop here.

What confuses me in this code is that it looks possible that the front-
and back-end could have different mclk values? I think a comment is
missing that the values are identical, it's just that there's a
different way to access it depending on the link type?


>>> +static int macaudio_be_init(struct snd_soc_pcm_runtime *rtd)
>>> +{
>>> +	struct snd_soc_card *card = rtd->card;
>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
>>> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
>>> +	struct snd_soc_dai *dai;
>>> +	int i, ret;
>>> +
>>> +	ret = macaudio_be_assign_tdm(rtd);
>>> +	if (ret < 0)
>>> +		return ret;
>>> +
>>> +	if (props->is_headphones) {
>>> +		for_each_rtd_codec_dais(rtd, i, dai)
>>> +			snd_soc_component_set_jack(dai->component, &ma->jack, NULL);
>>> +	}
>>
>> this is weird, set_jack() is invoked by the ASoC core. You shouldn't
>> need to do this?
> 
> That’s interesting. Where would it be invoked? How does ASoC know which codec
> it attaches to?

sorry, my comment was partly invalid.

set_jack() is indeed invoked in the machine driver; what I found strange
is that you may have different codecs handling the jack. What is the
purpose of that loop?


>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>> +				void *data);
>>> +
>>> +static struct notifier_block macaudio_jack_nb = {
>>> +	.notifier_call = macaudio_jack_event,
>>> +};
>>
>> why is this needed? we have already many ways of dealing with the jack
>> events (dare I say too many ways?).
> 
> Because I want to update the DAPM paths based on the jack status,
> specifically I want to set macaudio_plugin_demux. I don’t know how
> else it could be done.

I don't know either but I have never seen notifier blocks being used. I
would think there are already ways to do this with DAPM events.


>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>> +				void *data)
>>> +{
>>> +	struct snd_soc_jack *jack = data;
>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
>>> +
>>> +	ma->jack_plugin_state = !!event;
>>> +
>>> +	if (!ma->plugin_demux_kcontrol)
>>> +		return 0;
>>> +
>>> +	snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
>>> +				      ma->jack_plugin_state,
>>> +				      (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
>>
>> the term 'plugin' can be understood in many ways by different audio
>> folks. 'plugin' is usually the term used for processing libraries (VST,
>> LADSPA, etc). I think here you meant 'jack control'?
> 
> So ‘jack control’ would be understood as the jack plugged/unplugged status?

The 'Headphone Jack' or 'Headset Mic Jack' kcontrols typically track the
status. Other terms are 'jack detection'. "plugin" is not a very common
term here.
Martin Povišer June 6, 2022, 9:33 p.m. UTC | #6
> On 6. 6. 2022, at 23:22, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
> 
> On 6/6/22 15:46, Martin Povišer wrote:
>> (I am having trouble delivering mail to linux.intel.com, so I reply to the list
>> and CC at least...)
>> 
>>> On 6. 6. 2022, at 22:02, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
>>> 
>>> 
>>>> [...]
>>> 
>>> In Patch2, the 'clusters' are described as front-ends, with I2S as
>>> back-ends. Here the PCMs are described as front-ends, but there's no
>>> mention of clusters. Either one of the two descriptions is outdated, or
>>> there's something missing to help reconcile the two pieces of information?
>> 
>> Both descriptions should be in sync. Maybe I don’t know the proper
>> terminology. In both cases the frontend is meant to be the actual I2S
>> transceiver unit, and the backend the I2S port on the SoC’s periphery,
>> which can be routed to any of the transceiver units. (Multiple ports can
>> be routed to the same unit, which means the ports will have the same
>> clocks and data line -- that's a configuration we need to support to
>> drive some of the speaker arrays, hence the backend/frontend
>> distinction.)
>> 
>> Maybe I am using 'PCM' in a confusing way here? What I meant is a
>> subdevice that’s visible from userspace, because I have seen it used
>> that way in ALSA codebase.
> 
> Yes, I think most people familiar with DPCM would take the 'PCM
> frontend' as some sort of generic DMA transfer from system memory, while
> the 'back end' is more the actual serial link. In your case, the
> front-end is already very low-level and tied to I2S. I think that's
> fine; it's just that using different terms for 'cluster' and 'PCM' in
> different patches could lead to confusion.

OK, will take this into account then.

>>>> +static int macaudio_get_runtime_mclk_fs(struct snd_pcm_substream *substream)
>>>> +{
>>>> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
>>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(rtd->card);
>>>> +	struct snd_soc_dpcm *dpcm;
>>>> +
>>>> +	/*
>>>> +	 * If this is a FE, look it up in link_props directly.
>>>> +	 * If this is a BE, look it up in the respective FE.
>>>> +	 */
>>>> +	if (!rtd->dai_link->no_pcm)
>>>> +		return ma->link_props[rtd->dai_link->id].mclk_fs;
>>>> +
>>>> +	for_each_dpcm_fe(rtd, substream->stream, dpcm) {
>>>> +		int fe_id = dpcm->fe->dai_link->id;
>>>> +
>>>> +		return ma->link_props[fe_id].mclk_fs;
>>>> +	}
>>> 
>>> I am not sure what the concept of mclk would mean for a front-end? This
>>> is typically very I2S-specific, i.e. a back-end property, no?
>> 
>> Right, that’s a result of the confusion from above. Hope I cleared it up
>> somehow. The frontend already decides the clocks and data serialization,
>> hence mclk/fs is a frontend-prop here.
> 
> What confuses me in this code is that it looks possible that the front-
> and back-end could have different mclk values? I think a comment is
> missing that the values are identical, it's just that there's a
> different way to access it depending on the link type?

Well, that’s exactly what I am trying to convey by the comment above
in macaudio_get_runtime_mclk_fs. Maybe I should do something to make
it more obvious.

>>>> +static int macaudio_be_init(struct snd_soc_pcm_runtime *rtd)
>>>> +{
>>>> +	struct snd_soc_card *card = rtd->card;
>>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
>>>> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
>>>> +	struct snd_soc_dai *dai;
>>>> +	int i, ret;
>>>> +
>>>> +	ret = macaudio_be_assign_tdm(rtd);
>>>> +	if (ret < 0)
>>>> +		return ret;
>>>> +
>>>> +	if (props->is_headphones) {
>>>> +		for_each_rtd_codec_dais(rtd, i, dai)
>>>> +			snd_soc_component_set_jack(dai->component, &ma->jack, NULL);
>>>> +	}
>>> 
>>> this is weird, set_jack() is invoked by the ASoC core. You shouldn't
>>> need to do this?
>> 
>> That’s interesting. Where would it be invoked? How does ASoC know which codec
>> it attaches to?
> 
> sorry, my comment was partly invalid.
> 
> set_jack() is indeed invoked in the machine driver; what I found strange
> is that you may have different codecs handling the jack. What is the
> purpose of that loop?

There’s no need for the loop, there’s a single codec. OK, will remove the loop
to make it less confusing.

[...]
>>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>>> +				void *data)
>>>> +{
>>>> +	struct snd_soc_jack *jack = data;
>>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
>>>> +
>>>> +	ma->jack_plugin_state = !!event;
>>>> +
>>>> +	if (!ma->plugin_demux_kcontrol)
>>>> +		return 0;
>>>> +
>>>> +	snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
>>>> +				      ma->jack_plugin_state,
>>>> +				      (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
>>> 
>>> the term 'plugin' can be understood in many ways by different audio
>>> folks. 'plugin' is usually the term used for processing libraries (VST,
>>> LADSPA, etc). I think here you meant 'jack control'?
>> 
>> So ‘jack control’ would be understood as the jack plugged/unplugged status?
> 
> The 'Headphone Jack' or 'Headset Mic Jack' kcontrols typically track the
> status. Other terms are 'jack detection'. "plugin" is not a very common
> term here.

OK
Mark Brown June 9, 2022, 1:16 p.m. UTC | #7
On Mon, Jun 06, 2022 at 09:19:10PM +0200, Martin Povišer wrote:

> + *  ┌──────────┐       ┌───────────────► ┌─────┐     ┌──────────┐
> + *  │ Primary  ├───────┤                 │ Mux │ ──► │ Speakers │
> + *  └──────────┘       │    ┌──────────► └─────┘     └──────────┘
> + *                ┌─── │ ───┘             ▲
> + *  ┌──────────┐  │    │                  │
> + *  │Secondary ├──┘    │     ┌────────────┴┐
> + *  └──────────┘       ├────►│Plug-in Demux│
> + *                     │     └────────────┬┘
> + *                     │                  │
> + *                     │                  ▼
> + *                     │                 ┌─────┐     ┌──────────┐
> + *                     └───────────────► │ Mux │ ──► │Headphones│
> + *                                       └─────┘     └──────────┘

As far as I can tell this demux is entirely software based - why not
just expose the routing control to userspace and let it handle
switching (which I suspect may be more featureful than what's
implemented here)?

> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
> +                               void *data)
> +{
> +       struct snd_soc_jack *jack = data;
> +       struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
> +
> +       ma->jack_plugin_state = !!event;
> +
> +       if (!ma->plugin_demux_kcontrol)
> +               return 0;
> +
> +       snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
> +                                     ma->jack_plugin_state,
> +                                     (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
> +
> +       return 0;
> +}

This should be integrated with the core jack detection stuff in
soc-jack.c and/or the core stuff it's wrapping - that way you'll
ensure that events are generated and status readable via all the
interfaces userspace might be looking for.  The ASoC stuff also has some
DAPM integration for turning on/off outputs which might DTRT for you if
you do need it in kernel.
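(The DAPM integration referred to is presumably the jack-pin mechanism; a
sketch of how it is typically wired up, with the widget names illustrative:)

	static struct snd_soc_jack_pin jack_pins[] = {
		{
			.pin	= "Headphones",		/* DAPM widget to power */
			.mask	= SND_JACK_HEADPHONE,
		},
		{
			.pin	= "Speakers",
			.mask	= SND_JACK_HEADPHONE,
			.invert	= true,			/* on while unplugged */
		},
	};

	/* after snd_soc_card_jack_new(): */
	snd_soc_jack_add_pins(&ma->jack, ARRAY_SIZE(jack_pins), jack_pins);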
Mark Brown June 9, 2022, 1:33 p.m. UTC | #8
On Mon, Jun 06, 2022 at 09:19:10PM +0200, Martin Povišer wrote:

> +		/*
> +		 * Primary FE
> +		 *
> +		 * The mclk/fs ratio at 64 for the primary frontend is important
> +		 * to ensure that the headphones codec's idea of left and right
> +		 * in a stereo stream over I2S fits in nicely with everyone else's.
> +		 * (This is until the headphones codec's driver supports
> +		 * set_tdm_slot.)
> +		 *
> +		 * The low mclk/fs ratio precludes transmitting more than two
> +		 * channels over I2S, but that's okay since there is the secondary
> +		 * FE for speaker arrays anyway.
> +		 */
> +		.mclk_fs = 64,
> +	},

This seems weird - it looks like it's confusing MCLK and the bit clock
for the audio bus.  These are two different clocks.  Note that it's very
common for devices to require a higher MCLK/fs ratio to deliver the best
audio performance, 256fs is standard.
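To put numbers on the two clocks, at fs = 48 kHz:

	bit clock at 64fs:   64 * 48000 =  3.072 MHz  (two 32-bit slots per frame)
	MCLK at 256fs:      256 * 48000 = 12.288 MHz  (the usual master clock)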

> +	{
> +		/*
> +		 * Secondary FE
> +		 *
> +		 * Here we want frames plenty long to be able to drive all
> +		 * those fancy speaker arrays.
> +		 */
> +		.mclk_fs = 256,
> +	}

Same thing here - this is at least confusing MCLK and the bit clock.

> +static bool macaudio_match_kctl_name(const char *pattern, const char *name)
> +{
> +	if (pattern[0] == '*') {
> +		int namelen, patternlen;
> +
> +		pattern++;
> +		if (pattern[0] == ' ')
> +			pattern++;
> +
> +		namelen = strlen(name);
> +		patternlen = strlen(pattern);
> +
> +		if (namelen > patternlen)
> +			name += (namelen - patternlen);
> +	}
> +
> +	return !strcmp(name, pattern);
> +}
> +
> +static int macaudio_limit_volume(struct snd_soc_card *card,
> +				 const char *pattern, int max)
> +{
> +	struct snd_kcontrol *kctl;
> +	struct soc_mixer_control *mc;
> +	int found = 0;
> +
> +	list_for_each_entry(kctl, &card->snd_card->controls, list) {
> +		if (!macaudio_match_kctl_name(pattern, kctl->id.name))
> +			continue;
> +
> +		found++;
> +		dev_dbg(card->dev, "limiting volume on '%s'\n", kctl->id.name);
> +
> +		/*
> +		 * TODO: This doesn't decrease the volume if it's already
> +		 * above the limit!
> +		 */
> +		mc = (struct soc_mixer_control *)kctl->private_value;
> +		if (max <= mc->max)
> +			mc->platform_max = max;
> +
> +	}
> +
> +	return found;
> +}

This shouldn't be open coded in a driver, please factor it out into the
core so we've got an API for "set limit X on control Y" then call that.
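(For exact control names ASoC already has snd_soc_limit_volume(); a sketch
of what the driver call could reduce to once a pattern-aware variant lands
in the core, with the control name purely illustrative:)

	snd_soc_limit_volume(card, "Speaker Amp Gain",
			     ma->mdata->spk_amp_gain_max);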
Martin Povišer June 9, 2022, 1:42 p.m. UTC | #9
> On 9. 6. 2022, at 15:16, Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Jun 06, 2022 at 09:19:10PM +0200, Martin Povišer wrote:
> 
>> + *  ┌──────────┐       ┌───────────────► ┌─────┐     ┌──────────┐
>> + *  │ Primary  ├───────┤                 │ Mux │ ──► │ Speakers │
>> + *  └──────────┘       │    ┌──────────► └─────┘     └──────────┘
>> + *                ┌─── │ ───┘             ▲
>> + *  ┌──────────┐  │    │                  │
>> + *  │Secondary ├──┘    │     ┌────────────┴┐
>> + *  └──────────┘       ├────►│Plug-in Demux│
>> + *                     │     └────────────┬┘
>> + *                     │                  │
>> + *                     │                  ▼
>> + *                     │                 ┌─────┐     ┌──────────┐
>> + *                     └───────────────► │ Mux │ ──► │Headphones│
>> + *                                       └─────┘     └──────────┘
> 
> As far as I can tell this demux is entirely software based - why not
> just expose the routing control to userspace and let it handle
> switching (which I suspect may be more featureful than what's
> implemented here)?

Well, userspace should have the other two muxes at its disposal to
implement any routing/switching it wishes -- but in addition we also
offer to let the kernel take care of the switching, by pointing the
muxes at the demux.

I assume (though I don’t know the extent of what’s possible with UCM
files) that this will be of some value to users running plain ALSA with
no sound server.

>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>> +                               void *data)
>> +{
>> +       struct snd_soc_jack *jack = data;
>> +       struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
>> +
>> +       ma->jack_plugin_state = !!event;
>> +
>> +       if (!ma->plugin_demux_kcontrol)
>> +               return 0;
>> +
>> +       snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
>> +                                     ma->jack_plugin_state,
>> +                                     (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
>> +
>> +       return 0;
>> +}
> 
> This should be integrated with the core jack detection stuff in
> soc-jack.c and/or the core stuff that's wrapping - that way you'll
> ensure that events are generated and status readable via all the
> interfaces userspace might be looking for.  The ASoC stuff also has some
> DAPM integration for turning on/off outputs which might DTRT for you if
> you do need it in kernel.

Aren’t all the right events to userspace generated already by the
codec calling snd_soc_jack_report?
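
For concreteness, I mean reporting of this sort from the codec's IRQ
path (a sketch only; the exact status/mask bits depend on the codec,
and 'plugged' is just a stand-in for its detection result):

  /* 'plugged' is hypothetical, standing in for the codec's detection */
  snd_soc_jack_report(jack, plugged ? SND_JACK_HEADPHONE : 0,
                      SND_JACK_HEADSET);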

I looked at the existing DAPM integration but I couldn’t figure out
how to switch the demux with it.
Martin Povišer June 9, 2022, 2:09 p.m. UTC | #10
> On 9. 6. 2022, at 15:33, Mark Brown <broonie@kernel.org> wrote:
> 
> On Mon, Jun 06, 2022 at 09:19:10PM +0200, Martin Povišer wrote:
> 
>> +		/*
>> +		 * Primary FE
>> +		 *
>> +		 * The mclk/fs ratio at 64 for the primary frontend is important
>> +		 * to ensure that the headphones codec's idea of left and right
>> +		 * in a stereo stream over I2S fits in nicely with everyone else's.
>> +		 * (This is until the headphones codec's driver supports
>> +		 * set_tdm_slot.)
>> +		 *
>> +		 * The low mclk/fs ratio precludes transmitting more than two
>> +		 * channels over I2S, but that's okay since there is the secondary
>> +		 * FE for speaker arrays anyway.
>> +		 */
>> +		.mclk_fs = 64,
>> +	},
> 
> This seems weird - it looks like it's confusing MCLK and the bit clock
> for the audio bus.  These are two different clocks.  Note that it's very
> common for devices to require a higher MCLK/fs ratio to deliver the best
> audio performance, 256fs is standard.

On these machines we are not producing any other clock for the codecs
besides the bit clock, so I am using MCLK interchangeably for it. (It is
what the sample rate is derived from after all.)

One of the codec drivers this is to be used with (cs42l42) expects to be
given the I2S bit clock with

  snd_soc_dai_set_sysclk(dai, 0, mclk, SND_SOC_CLOCK_IN);

Maybe I can rename mclk to bclk in all of the code to make it clearer.
Also, the platform driver can take the bit clock value from
set_bclk_ratio instead of set_sysclk, where it takes it from now. I can
patch the cs42l42 driver too so that it accepts set_bclk_ratio.
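
In the machine driver that would look something like this (a sketch
only; 64 standing in for two 32-bit slots per frame, and cpu_dai being
the MCA frontend DAI):

  /* in hw_params: hand the bclk/fs ratio to the platform side */
  ret = snd_soc_dai_set_bclk_ratio(cpu_dai, 64);
  if (ret < 0)
          return ret;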

>> +	{
>> +		/*
>> +		 * Secondary FE
>> +		 *
>> +		 * Here we want frames plenty long to be able to drive all
>> +		 * those fancy speaker arrays.
>> +		 */
>> +		.mclk_fs = 256,
>> +	}
> 
> Same thing here - this is at least confusing MCLK and the bit clock.
> 
>> +static bool macaudio_match_kctl_name(const char *pattern, const char *name)
>> +{
>> +	if (pattern[0] == '*') {
>> +		int namelen, patternlen;
>> +
>> +		pattern++;
>> +		if (pattern[0] == ' ')
>> +			pattern++;
>> +
>> +		namelen = strlen(name);
>> +		patternlen = strlen(pattern);
>> +
>> +		if (namelen > patternlen)
>> +			name += (namelen - patternlen);
>> +	}
>> +
>> +	return !strcmp(name, pattern);
>> +}
>> +
>> +static int macaudio_limit_volume(struct snd_soc_card *card,
>> +				 const char *pattern, int max)
>> +{
>> +	struct snd_kcontrol *kctl;
>> +	struct soc_mixer_control *mc;
>> +	int found = 0;
>> +
>> +	list_for_each_entry(kctl, &card->snd_card->controls, list) {
>> +		if (!macaudio_match_kctl_name(pattern, kctl->id.name))
>> +			continue;
>> +
>> +		found++;
>> +		dev_dbg(card->dev, "limiting volume on '%s'\n", kctl->id.name);
>> +
>> +		/*
>> +		 * TODO: This doesn't decrease the volume if it's already
>> +		 * above the limit!
>> +		 */
>> +		mc = (struct soc_mixer_control *)kctl->private_value;
>> +		if (max <= mc->max)
>> +			mc->platform_max = max;
>> +
>> +	}
>> +
>> +	return found;
>> +}
> 
> This shouldn't be open coded in a driver, please factor it out into the
> core so we've got an API for "set limit X on control Y" then call that.

There’s already snd_soc_limit_volume, but it takes a fixed control name.
Can I extend it to understand patterns beginning with a wildcard, like
'* Amp Gain Volume'?
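
Then the machine driver could just do (a sketch; the limit value here
is illustrative, not the one the driver would actually use):

  /* suffix match when the pattern starts with '*' */
  snd_soc_limit_volume(card, "* Amp Gain Volume", 9);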
Mark Brown June 9, 2022, 3:03 p.m. UTC | #11
On Thu, Jun 09, 2022 at 03:42:09PM +0200, Martin Povišer wrote:
> > On 9. 6. 2022, at 15:16, Mark Brown <broonie@kernel.org> wrote:

> > As far as I can tell this demux is entirely software based - why not
> > just expose the routing control to userspace and let it handle
> > switching (which I suspect may be more featureful than what's
> > implemented here)?

> Well, userspace should have the other two muxes at its disposal to
> implement any routing/switching it wishes -- but in addition we also
> offer to let the kernel take care of the switching, by pointing the
> muxes at the demux.

> I assume (though I don’t know the extent of what’s possible with UCM
> files) that this will be of some value to users running plain ALSA with
> no sound server.

That's basically no userspaces at this point TBH.  I'm not convinced
it's a good idea to be adding custom code for that use case.

> > This should be integrated with the core jack detection stuff in
> > soc-jack.c and/or the core stuff that's wrapping - that way you'll
> > ensure that events are generated and status readable via all the
> > interfaces userspace might be looking for.  The ASoC stuff also has some
> > DAPM integration for turning on/off outputs which might DTRT for you if
> > you do need it in kernel.

> Aren’t all the right events to userspace generated already by the
> codec calling snd_soc_jack_report?

I wasn't able to find any references to snd_soc_jack_report() in your
series?

> I looked at the existing DAPM integration but I couldn’t figure out
> how to switch the demux with it.

Yes, it won't do that.  If you can't stream the same audio to both then
you'd need something else.
Mark Brown June 9, 2022, 3:16 p.m. UTC | #12
On Thu, Jun 09, 2022 at 04:09:57PM +0200, Martin Povišer wrote:
> > On 9. 6. 2022, at 15:33, Mark Brown <broonie@kernel.org> wrote:

> >> +		/*
> >> +		 * Primary FE
> >> +		 *
> >> +		 * The mclk/fs ratio at 64 for the primary frontend is important
> >> +		 * to ensure that the headphones codec's idea of left and right
> >> +		 * in a stereo stream over I2S fits in nicely with everyone else's.
> >> +		 * (This is until the headphones codec's driver supports
> >> +		 * set_tdm_slot.)
> >> +		 *
> >> +		 * The low mclk/fs ratio precludes transmitting more than two
> >> +		 * channels over I2S, but that's okay since there is the secondary
> >> +		 * FE for speaker arrays anyway.
> >> +		 */
> >> +		.mclk_fs = 64,
> >> +	},

> > This seems weird - it looks like it's confusing MCLK and the bit clock
> > for the audio bus.  These are two different clocks.  Note that it's very
> > common for devices to require a higher MCLK/fs ratio to deliver the best
> > audio performance, 256fs is standard.

> On these machines we are not producing any other clock for the codecs
> besides the bit clock, so I am using MCLK interchangeably for it. (It is
> what the sample rate is derived from after all.)

Please don't do this; you're just making everything needlessly hard to
understand by using standard terminology inappropriately, and there's a
risk of breakage further down the line with drivers implementing the
standard ops.

> One of the codec drivers this is to be used with (cs42l42) expects to be
> given the I2S bit clock with

>   snd_soc_dai_set_sysclk(dai, 0, mclk, SND_SOC_CLOCK_IN);

That's very, very non-standard...

> Maybe I can rename mclk to bclk in all of the code to make it clearer.
> Also, the platform driver can take the bit clock value from
> set_bclk_ratio instead of set_sysclk, where it takes it from now. I can
> patch the cs42l42 driver too so that it accepts set_bclk_ratio.

...indeed, set_bclk_ratio() is a better interface for setting the bclk
ratio - the CODEC driver should really be doing that as well.

> > This shouldn't be open coded in a driver, please factor it out into the
> > core so we've got an API for "set limit X on control Y" then call that.

> There’s already snd_soc_limit_volume, but it takes a fixed control name.
> Can I extend it to understand patterns beginning with a wildcard, like
> '* Amp Gain Volume’?

Or add a new call that does that.
Martin Povišer June 9, 2022, 3:27 p.m. UTC | #13
> On 9. 6. 2022, at 17:16, Mark Brown <broonie@kernel.org> wrote:
> 
> On Thu, Jun 09, 2022 at 04:09:57PM +0200, Martin Povišer wrote:
>>> On 9. 6. 2022, at 15:33, Mark Brown <broonie@kernel.org> wrote:
> 
>>>> +		/*
>>>> +		 * Primary FE
>>>> +		 *
>>>> +		 * The mclk/fs ratio at 64 for the primary frontend is important
>>>> +		 * to ensure that the headphones codec's idea of left and right
>>>> +		 * in a stereo stream over I2S fits in nicely with everyone else's.
>>>> +		 * (This is until the headphones codec's driver supports
>>>> +		 * set_tdm_slot.)
>>>> +		 *
>>>> +		 * The low mclk/fs ratio precludes transmitting more than two
>>>> +		 * channels over I2S, but that's okay since there is the secondary
>>>> +		 * FE for speaker arrays anyway.
>>>> +		 */
>>>> +		.mclk_fs = 64,
>>>> +	},
> 
>>> This seems weird - it looks like it's confusing MCLK and the bit clock
>>> for the audio bus.  These are two different clocks.  Note that it's very
>>> common for devices to require a higher MCLK/fs ratio to deliver the best
>>> audio performance, 256fs is standard.
> 
>> On these machines we are not producing any other clock for the codecs
>> besides the bit clock, so I am using MCLK interchangeably for it. (It is
>> what the sample rate is derived from after all.)
> 
> Please don't do this; you're just making everything needlessly hard to
> understand by using standard terminology inappropriately, and there's a
> risk of breakage further down the line with drivers implementing the
> standard ops.

OK

>> One of the codec drivers this is to be used with (cs42l42) expects to be
>> given the I2S bit clock with
> 
>>  snd_soc_dai_set_sysclk(dai, 0, mclk, SND_SOC_CLOCK_IN);
> 
> That's very, very non-standard...
> 
>> Maybe I can rename mclk to bclk in all of the code to make it clearer.
>> Also, the platform driver can take the bit clock value from
>> set_bclk_ratio instead of set_sysclk, where it takes it from now. I can
>> patch the cs42l42 driver too so that it accepts set_bclk_ratio.
> 
> ...indeed, set_bclk_ratio() is a better interface for setting the bclk
> ratio - the CODEC driver should really be doing that as well.

OK, adding that to my TODOs.

>>> This shouldn't be open coded in a driver, please factor it out into the
>>> core so we've got an API for "set limit X on control Y" then call that.
> 
>> There’s already snd_soc_limit_volume, but it takes a fixed control name.
>> Can I extend it to understand patterns beginning with a wildcard, like
>> '* Amp Gain Volume'?
> 
> Or add a new call that does that.

OK
Mark Brown June 9, 2022, 3:53 p.m. UTC | #14
On Mon, Jun 06, 2022 at 09:19:05PM +0200, Martin Povišer wrote:

>  - The way the platform/machine driver handles the fact that multiple I2S
>    ports (now backend DAIs) can be driven by/connected to the same SERDES
>    unit (now in effect a frontend DAI). After previous discussion I have
>    transitioned to DPCM to model this. I took the opportunity of dynamic
>    backend/frontend routing to support speakers/headphones runtime
>    switching. More on this in comments at top of the machine and platform
>    driver.

This looks roughly like I'd expect now, there's some issues from myself
and Pierre but it's more around the edges than anything big picture.
Martin Povišer June 9, 2022, 4:19 p.m. UTC | #15
> On 9. 6. 2022, at 17:50, Mark Brown <broonie@kernel.org> wrote:
> 
> On Thu, Jun 09, 2022 at 05:24:49PM +0200, Martin Povišer wrote:
>>> On 9. 6. 2022, at 17:03, Mark Brown <broonie@kernel.org> wrote:
> 
> Why is this off list?

By accident. I added the CC list back with this reply (hopefully it
still attaches to the thread when people receive it).

>>> That's basically no userspaces at this point TBH.  I'm not convinced
>>> it's a good idea to be adding custom code for that use case.
>> 
>> FWIW I know of at least one user of the WIP audio support on Macs who
>> would welcome this feature. My preference is to keep it in, but in
>> the end I guess it’s your call.
> 
I'd rather not have this open coded in individual drivers; we already
have an unfortunate abundance of jack detection interfaces.  If we're
going to add anything I'd rather it were in core code, and TBH I'm
struggling to be enthusiastic.

Noted.

> Can you say anything more about the use case?

I can restate: The alleged use case is running userspace without sound
server, but having playback switch transparently between speakers and
headphones even mid-stream based on jack detection.

>>>> I looked at the existing DAPM integration but I couldn’t figure out
>>>> how to switch the demux with it.
> 
>>> Yes, it won't do that.  If you can't stream the same audio to both then
>>> you'd need something else.
> 
>> I don’t understand what’s meant by streaming the same audio here.
> 
> Playing one audio stream from the host which appears on both speakers
> and headphones - I don't know what the mixing and muxing capabilities of
> the hardware are.
> 
>> Taking a guess: The existing DAPM integration can enable the headphones
>> path based on jack being plugged in, but it can’t disable the speakers
>> path like the demux does?
> 
> No, that works perfectly fine - you can enable or disable pins depending
> on the jack state.

Ah, I peeked into soc-jack.c. What about this then: If I understand what
pins represent, they would be at the remote end of the DAPM paths. So if
for the speakers I add something like

   Headphones Codec Out --> Jack pin

                       +--> Always-on pin
                       |
   Speaker Amp Out -> Mux
                       |
                       +--> Jack inverted pin

and let userspace control the mux, it would in effect support the same
use cases as what I attempted in the code so far. Sounds somewhat right?
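
In code, the jack half of that would be roughly (a sketch with made-up
pin/widget names; the mux itself stays a plain userspace-controlled
virtual control):

  static struct snd_soc_jack_pin macaudio_jack_pins[] = {
          {
                  /* enabled while headphones are plugged in */
                  .pin = "Headphones",
                  .mask = SND_JACK_HEADPHONE,
          },
          {
                  /* inverted: enabled while the jack is empty */
                  .pin = "Speakers Jack Gate",
                  .mask = SND_JACK_HEADPHONE,
                  .invert = 1,
          },
  };

  /* at card init, once the jack has been created */
  err = snd_soc_jack_add_pins(&ma->jack, ARRAY_SIZE(macaudio_jack_pins),
                              macaudio_jack_pins);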
Mark Brown June 9, 2022, 4:35 p.m. UTC | #16
On Thu, Jun 09, 2022 at 06:19:37PM +0200, Martin Povišer wrote:
> > On 9. 6. 2022, at 17:50, Mark Brown <broonie@kernel.org> wrote:

> > Can you say anything more about the use case?

> I can restate: The alleged use case is running userspace without sound
> server, but having playback switch transparently between speakers and
> headphones even mid-stream based on jack detection.

Sure, but why?

> > No, that works perfectly fine - you can enable or disable pins depending
> > on the jack state.

> Ah, I peeked into soc-jack.c. What about this then: If I understand what
> pins represent, they would be at the remote end of the DAPM paths. So if
> for the speakers I add something like

>    Headphones Codec Out --> Jack pin
> 
>                        +--> Always-on pin
>                        |
>    Speaker Amp Out -> Mux
>                        |
>                        +--> Jack inverted pin

> and let userspace control the mux, it would in effect support the same
> use cases as what I attempted in the code so far. Sounds somewhat right?

Yes, that should DTRT.  If the mux is working properly with DAPM (not
sure it does with DPCM but ICBW) then you shouldn't even need the jack
integration since the mux would disconnect the unused output when it
gets switched.
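
(ie, a standard virtual mux, something along the lines of:

  static const char * const out_mux_texts[] = { "Speakers", "Headphones" };
  static SOC_ENUM_SINGLE_VIRT_DECL(out_mux_enum, out_mux_texts);
  static const struct snd_kcontrol_new out_mux_control =
          SOC_DAPM_ENUM("Output Mux", out_mux_enum);

  /* in the card's widget array; NOPM since no register backs it */
  SND_SOC_DAPM_MUX("Output Mux", SND_SOC_NOPM, 0, 0, &out_mux_control),

names invented for illustration - DAPM will then only power up
whichever input path is currently selected.)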
Mark Brown June 10, 2022, 3:58 p.m. UTC | #17
On Mon, 6 Jun 2022 21:19:05 +0200, Martin Povišer wrote:
> This is again RFC with a machine-level ASoC driver for recent Apple Macs
> with the M1 line of chips. This time I attached the platform driver too
> for good measure. What I am interested in the most is checking the overall
> approach, especially on two points (both in some ways already discussed
> in previous RFC [0]):
> 
>  - The way the platform/machine driver handles the fact that multiple I2S
>    ports (now backend DAIs) can be driven by/connected to the same SERDES
>    unit (now in effect a frontend DAI). After previous discussion I have
>    transitioned to DPCM to model this. I took the opportunity of dynamic
>    backend/frontend routing to support speakers/headphones runtime
>    switching. More on this in comments at top of the machine and platform
>    driver.
> 
> [...]

Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next

Thanks!

[4/5] ASoC: Introduce 'fixup_controls' card method
      commit: df4d27b19b892f464685ea45fa6132dd1a2b6864

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix); however, if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark