
[0/8] iio: ad9467: support interface tuning

Message ID 20240419-ad9467-new-features-v1-0-3e7628ff6d5e@analog.com

Message

Nuno Sa via B4 Relay April 19, 2024, 3:36 p.m. UTC
Hi Jonathan,

Here goes one more set of new functionality for the backend
framework. This allows for one of the most important missing features of
the ad9467 driver. I hope the new interfaces are fairly straightforward,
though there's one that's likely to catch your attention:

iio_backend_iodelay_set()

as you would (rightfully) expect some delay with actual units. The
reason why it does not have any units is that the IO delay is mostly a
calibration done at the backend level, and the actual values and timings
(that each tap corresponds to) are very HW specific. For example, the
Xilinx/AMD zedboard has different specifications when compared to the
zc706.

Given the above, I admit (😅) I went the easier path and just added a
parameter with no meaningful unit (with proper docs). I'm definitely open
to ideas if this feels too hacky. One thing I thought of would be an
additional API that could be called during probe to get an array of
delays from the backend. Something like:

iio_backend_iodelays_get(back, const unsigned int **delays_ps, unsigned int *ndelays)

The backend should know what delays it supports. For the axi-adc IP we
do have registers to detect the FPGA grade etc., so we could return the
delays based on the HW we are running on. We would also need an
additional refclk, as the actual delay each tap introduces depends on it.
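For illustration, a tiny user-space model of that (purely hypothetical) API — none of these names exist in mainline, and the delay numbers are made up; in practice they would depend on the FPGA grade and refclk:

```c
#include <stddef.h>

/* Hypothetical backend handle: knows its own tap -> delay mapping. */
struct fake_backend {
	const unsigned int *delays_ps;	/* delay introduced by each tap, in ps */
	unsigned int ndelays;		/* number of taps */
};

/*
 * Model of the proposed iio_backend_iodelays_get(): the backend hands the
 * frontend an array with the real delay (in ps) of every tap it supports.
 */
static int iio_backend_iodelays_get(struct fake_backend *back,
				    const unsigned int **delays_ps,
				    unsigned int *ndelays)
{
	if (!back->ndelays)
		return -1;	/* stand-in for -EINVAL */
	*delays_ps = back->delays_ps;
	*ndelays = back->ndelays;
	return 0;
}
```

The frontend would then sweep `ndelays` calibration points knowing what each one means in time, instead of treating taps as unitless.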

The series also has some "unrelated" patches for improvements and fixes. 

---
Nuno Sa (8):
      iio: backend: add API for interface tuning
      iio: adc: adi-axi-adc: only error out in major version mismatch
      dt-bindings: adc: axi-adc: add clocks property
      iio: adc: axi-adc: make sure AXI clock is enabled
      iio: adc: adi-axi-adc: remove regmap max register
      iio: adc: adi-axi-adc: support digital interface calibration
      iio: adc: ad9467: cache the sample rate
      iio: adc: ad9467: support digital interface calibration

 .../devicetree/bindings/iio/adc/adi,axi-adc.yaml   |   5 +
 drivers/iio/adc/ad9467.c                           | 340 ++++++++++++++++++---
 drivers/iio/adc/adi-axi-adc.c                      | 123 +++++++-
 drivers/iio/industrialio-backend.c                 |  86 ++++++
 include/linux/iio/backend.h                        |  57 +++-
 5 files changed, 561 insertions(+), 50 deletions(-)
---
base-commit: 62d3fb9dcc091ccdf25eb3b716e90e07e3ed861f
change-id: 20240419-ad9467-new-features-fbfbaa5edf06
--

Thanks!
- Nuno Sá

Comments

Jonathan Cameron April 20, 2024, 3 p.m. UTC | #1
On Fri, 19 Apr 2024 17:36:44 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> From: Nuno Sa <nuno.sa@analog.com>
> 
> This is in preparation for supporting interface tuning in one of the
> devices using the axi-adc backend. The newly added interfaces are all
> needed for that calibration:

Would be good to have a little more info in this commit message on what
interface tuning involves?  I hope a tuning fork and a very good sense
of hearing...

> 
>  * iio_backend_test_pattern_set();
>  * iio_backend_chan_status();
>  * iio_backend_iodelay_set();
>  * iio_backend_data_sample_trigger().
> 
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>
Otherwise, trivial stuff inline.  Mostly looks fine. 

I appreciate you pointed out the taps thing was unit free and hence
possibly controversial.  Not much we can do about it and the reality is
it's a tweak factor - things like calibbias are unit free as well
for exactly the reason that they tend to be incredibly hardware dependent
and sometimes even instance-of-hardware dependent.

Jonathan

> ---
>  drivers/iio/industrialio-backend.c | 86 ++++++++++++++++++++++++++++++++++++++
>  include/linux/iio/backend.h        | 57 +++++++++++++++++++++----
>  2 files changed, 136 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c
> index 2fea2bbbe47fd..45eea3b725a35 100644
> --- a/drivers/iio/industrialio-backend.c
> +++ b/drivers/iio/industrialio-backend.c
> @@ -186,6 +186,92 @@ int iio_backend_data_format_set(struct iio_backend *back, unsigned int chan,
>  }
>  EXPORT_SYMBOL_NS_GPL(iio_backend_data_format_set, IIO_BACKEND);
>  
> +/**
> + * iio_backend_test_pattern_set - Configure a test pattern
> + * @back:	Backend device
> + * @chan:	Channel number
> + * @pattern:	Test pattern to configure
> + *
> + * Configure a test pattern on the backend. This is typically used for
> + * calibrating the timings on the data digital interface.
> + *
> + * RETURNS:
> + * 0 on success, negative error number on failure.
> + */
> +int iio_backend_test_pattern_set(struct iio_backend *back,
> +				 unsigned int chan,
> +				 enum iio_backend_test_pattern pattern)
> +{
> +	if (pattern >= IIO_BACKEND_TEST_PATTERN_MAX)
> +		return -EINVAL;
> +
> +	return iio_backend_op_call(back, test_pattern_set, chan, pattern);
> +}
> +EXPORT_SYMBOL_NS_GPL(iio_backend_test_pattern_set, IIO_BACKEND);
> +
> +/**
> + * iio_backend_chan_status - Get the channel status
> + * @back:	Backend device
> + * @chan:	Channel number
> + * @status:	Channel status

Feels premature to define a structure for status when it simply returns if
there is an error so far.  Maybe simplify for now, and revisit once that
structure needs to be more complex?

> + *
> + * Get the current state of the backend channel. Typically used to check if
> + * there were any errors sending/receiving data.
> + *
> + * RETURNS:
> + * 0 on success, negative error number on failure.
> + */
> +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> +			    struct iio_backend_chan_status *status)
> +{
> +	return iio_backend_op_call(back, chan_status, chan, status);
> +}
> +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> +
> +/**
> + * iio_backend_iodelay_set - Set digital I/O delay
> + * @back:	Backend device
> + * @lane:	Lane number
> + * @tap:	Number of taps
> + *
> + * Controls delays on sending/receiving data. One usecase for this is to
> + * calibrate the data digital interface so we get the best results when
> + * transferring data. Note that @tap has no unit since the actual delay per tap
> + * is very backend specific. Hence, frontend devices typically should go through
> + * an array of @taps (the size of that array should typically match the size of
> + * calibration points on the frontend device) and call this API.
> + *
> + * RETURNS:
> + * 0 on success, negative error number on failure.
> + */
> +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> +			    unsigned int tap)

taps maybe given it's a number of them?
Is this an industry standard term - sounds like it probably is but my
google fu is failing.

> +{
> +	return iio_backend_op_call(back, iodelay_set, lane, tap);
> +}
> +EXPORT_SYMBOL_NS_GPL(iio_backend_iodelay_set, IIO_BACKEND);
> +
> +/**
> + * iio_backend_data_sample_trigger - Control when to sample data
> + * @back:	Backend device
> + * @trigger:	Data trigger
> + *
> + * Mostly useful for input backends. Configures the backend for when to sample
> + * data (eg: rising vs falling edge).

Feels like it might become a flags field at some point, but enum is fine for
trigger for now I guess.

> + *
> + * RETURNS:
> + * 0 on success, negative error number on failure.
> + */
> +int iio_backend_data_sample_trigger(struct iio_backend *back,
> +				    enum iio_backend_sample_trigger trigger)
> +{
> +	if (trigger >= IIO_BACKEND_SAMPLE_TRIGGER_MAX)
> +		return -EINVAL;
> +
> +	return iio_backend_op_call(back, data_sample_trigger, trigger);
> +}
> +EXPORT_SYMBOL_NS_GPL(iio_backend_data_sample_trigger, IIO_BACKEND);
> +
>  static void iio_backend_free_buffer(void *arg)
>  {
>  	struct iio_backend_buffer_pair *pair = arg;
> diff --git a/include/linux/iio/backend.h b/include/linux/iio/backend.h
> index a6d79381866ec..ad793fe0d78c2 100644
> --- a/include/linux/iio/backend.h
> +++ b/include/linux/iio/backend.h
> @@ -15,6 +15,19 @@ enum iio_backend_data_type {
>  	IIO_BACKEND_DATA_TYPE_MAX
>  };
>  
> +/* vendor specific from 32 */
> +enum iio_backend_test_pattern {
> +	/* modified prbs9 */
> +	IIO_BACKEND_ADI_PRBS_9A = 32,

Not knowing anything much about this, does it make sense to use an enum,
or should we face facts that we can't have a true generic interface
and just use a suitably sized int?

How do you unset the test pattern? I expected an IIO_BACKEND_NO_TESTPATTERN = 0
or something like that.

> +	IIO_BACKEND_TEST_PATTERN_MAX
> +};
> +
> +enum iio_backend_sample_trigger {
> +	IIO_BACKEND_SAMPLE_TRIGGER_EDGE_FALLING,
> +	IIO_BACKEND_SAMPLE_TRIGGER_EDGE_RISING,
> +	IIO_BACKEND_SAMPLE_TRIGGER_MAX
> +};
> +
>  /**
>   * struct iio_backend_data_fmt - Backend data format
>   * @type:		Data type.
> @@ -28,15 +41,27 @@ struct iio_backend_data_fmt {
>  	bool enable;
>  };
>  
> +/**
> + * struct iio_backend_chan_status - Backend channel status
> + *  @errors:	Errors occurred when sending/receiving data.

error, it's only a bool so we know there was at least one.

> + */
> +struct iio_backend_chan_status {
> +	bool errors;
> +};
> +
>  /**
>   * struct iio_backend_ops - operations structure for an iio_backend
> - * @enable:		Enable backend.
> - * @disable:		Disable backend.
> - * @chan_enable:	Enable one channel.
> - * @chan_disable:	Disable one channel.
> - * @data_format_set:	Configure the data format for a specific channel.
> - * @request_buffer:	Request an IIO buffer.
> - * @free_buffer:	Free an IIO buffer.
> + * @enable:			Enable backend.

Hmm. I dislike aligning comments because of this sort of noise.
I guess I can cope without the ideal precursor patch making the padding
change, but I am moaning about it...

> + * @disable:			Disable backend.
> + * @chan_enable:		Enable one channel.
> + * @chan_disable:		Disable one channel.
> + * @data_format_set:		Configure the data format for a specific channel.
> + * @test_pattern_set:		Configure a test pattern.
> + * @chan_status:		Get the channel status.
> + * @iodelay_set:		Set digital I/O delay.
> + * @data_sample_trigger:	Control when to sample data.
> + * @request_buffer:		Request an IIO buffer.
> + * @free_buffer:		Free an IIO buffer.
>   **/
>  struct iio_backend_ops {
>  	int (*enable)(struct iio_backend *back);
> @@ -45,6 +70,15 @@ struct iio_backend_ops {
>  	int (*chan_disable)(struct iio_backend *back, unsigned int chan);
>  	int (*data_format_set)(struct iio_backend *back, unsigned int chan,
>  			       const struct iio_backend_data_fmt *data);
> +	int (*test_pattern_set)(struct iio_backend *back,
> +				unsigned int chan,
> +				enum iio_backend_test_pattern pattern);
> +	int (*chan_status)(struct iio_backend *back, unsigned int chan,
> +			   struct iio_backend_chan_status *status);
> +	int (*iodelay_set)(struct iio_backend *back, unsigned int chan,
> +			   unsigned int tap);
> +	int (*data_sample_trigger)(struct iio_backend *back,
> +				   enum iio_backend_sample_trigger trigger);
>  	struct iio_buffer *(*request_buffer)(struct iio_backend *back,
> 					     struct iio_dev *indio_dev);
>  	void (*free_buffer)(struct iio_backend *back,
> @@ -56,6 +90,15 @@ int iio_backend_chan_disable(struct iio_backend *back, unsigned int chan);
>  int devm_iio_backend_enable(struct device *dev, struct iio_backend *back);
>  int iio_backend_data_format_set(struct iio_backend *back, unsigned int chan,
>  				const struct iio_backend_data_fmt *data);
> +int iio_backend_test_pattern_set(struct iio_backend *back,
> +				 unsigned int chan,
> +				 enum iio_backend_test_pattern pattern);
> +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> +			    struct iio_backend_chan_status *status);
> +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> +			    unsigned int tap);
> +int iio_backend_data_sample_trigger(struct iio_backend *back,
> +				    enum iio_backend_sample_trigger trigger);
>  int devm_iio_backend_request_buffer(struct device *dev,
>  				    struct iio_backend *back,
>  				    struct iio_dev *indio_dev);
>
Jonathan Cameron April 20, 2024, 3:02 p.m. UTC | #2
On Fri, 19 Apr 2024 17:36:45 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> From: Nuno Sa <nuno.sa@analog.com>
> 
> The IP core only has breaking changes when the major version changes.
> Hence, only match the major number. This is also in line with the other
> cores ADI has upstream. The current check for erroring out on
> 'expected_version > current_version' is then wrong, as we could just
> increase the core major with breaking changes and that would go
> unnoticed.
> 
> Fixes: ef04070692a2 ("iio: adc: adi-axi-adc: add support for AXI ADC IP core")
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>
You did point out in the cover letter that there were some unrelated
changes in this series.  Would have been better to just pull them out as
a precursor and hence not have to mention it!

Anyhow, I'll pick at least this one up now but I'm not rushing it in as it
doesn't feel urgent.  Applied to the togreg branch of iio.git and pushed out
as testing.

Thanks,

Jonathan

> ---
>  drivers/iio/adc/adi-axi-adc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
> index 4156639b3c8bd..a543b91124b07 100644
> --- a/drivers/iio/adc/adi-axi-adc.c
> +++ b/drivers/iio/adc/adi-axi-adc.c
> @@ -207,9 +207,9 @@ static int adi_axi_adc_probe(struct platform_device *pdev)
>  	if (ret)
>  		return ret;
>  
> -	if (*expected_ver > ver) {
> +	if (ADI_AXI_PCORE_VER_MAJOR(ver) != ADI_AXI_PCORE_VER_MAJOR(*expected_ver)) {
>  		dev_err(&pdev->dev,
> -			"IP core version is too old. Expected %d.%.2d.%c, Reported %d.%.2d.%c\n",
> +			"Major version mismatch. Expected %d.%.2d.%c, Reported %d.%.2d.%c\n",
>  			ADI_AXI_PCORE_VER_MAJOR(*expected_ver),
>  			ADI_AXI_PCORE_VER_MINOR(*expected_ver),
>  			ADI_AXI_PCORE_VER_PATCH(*expected_ver),
>
Jonathan Cameron April 20, 2024, 3:04 p.m. UTC | #3
On Fri, 19 Apr 2024 17:36:47 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> From: Nuno Sa <nuno.sa@analog.com>
> 
> We can only access the IP core registers if the bus clock is enabled. As
> such we need to get and enable it and not rely on anyone else to do it.
> 
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>
Likewise - smells like a fix, but no fixes tag?

> ---
>  drivers/iio/adc/adi-axi-adc.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
> index a543b91124b07..e3b2158829416 100644
> --- a/drivers/iio/adc/adi-axi-adc.c
> +++ b/drivers/iio/adc/adi-axi-adc.c
> @@ -175,6 +175,7 @@ static int adi_axi_adc_probe(struct platform_device *pdev)
>  	struct adi_axi_adc_state *st;
>  	void __iomem *base;
>  	unsigned int ver;
> +	struct clk *clk;
>  	int ret;
>  
>  	st = devm_kzalloc(&pdev->dev, sizeof(*st), GFP_KERNEL);
> @@ -195,6 +196,10 @@ static int adi_axi_adc_probe(struct platform_device *pdev)
>  	if (!expected_ver)
>  		return -ENODEV;
>  
> +	clk = devm_clk_get_enabled(&pdev->dev, NULL);
> +	if (IS_ERR(clk))
> +		return PTR_ERR(clk);
> +
>  	/*
>  	 * Force disable the core. Up to the frontend to enable us. And we can
>  	 * still read/write registers...
>
Jonathan Cameron April 20, 2024, 3:13 p.m. UTC | #4
On Fri, 19 Apr 2024 17:36:49 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> From: Nuno Sa <nuno.sa@analog.com>
> 
> Implement the new IIO backend APIs for calibrating the data
> digital interfaces.
> 
> While at it, removed the tabs in 'struct adi_axi_adc_state' and used
> spaces for the members.

Ideally a precursor patch, but meh, it's 2 lines so I'll just moan about
it and move on.

A few minor things inline.


> 
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> ---
>  drivers/iio/adc/adi-axi-adc.c | 113 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 111 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
> index b312369b7366..d58fa05499c4 100644
> --- a/drivers/iio/adc/adi-axi-adc.c
> +++ b/drivers/iio/adc/adi-axi-adc.c
> @@ -7,11 +7,13 @@
>   */
>  
>  #include <linux/bitfield.h>
> +#include <linux/cleanup.h>
>  #include <linux/clk.h>
>  #include <linux/err.h>
>  #include <linux/io.h>
>  #include <linux/delay.h>
>  #include <linux/module.h>
> +#include <linux/mutex.h>
>  #include <linux/of.h>
>  #include <linux/platform_device.h>
>  #include <linux/property.h>
> @@ -37,6 +39,9 @@
>  #define   ADI_AXI_REG_RSTN_MMCM_RSTN		BIT(1)
>  #define   ADI_AXI_REG_RSTN_RSTN			BIT(0)
>  
> +#define ADI_AXI_ADC_REG_CTRL			0x0044
> +#define    ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK	BIT(1)
> +
>  /* ADC Channel controls */
>  
>  #define ADI_AXI_REG_CHAN_CTRL(c)		(0x0400 + (c) * 0x40)
> @@ -51,14 +56,28 @@
>  #define   ADI_AXI_REG_CHAN_CTRL_PN_TYPE_OWR	BIT(1)
>  #define   ADI_AXI_REG_CHAN_CTRL_ENABLE		BIT(0)
>  
> +#define ADI_AXI_ADC_REG_CHAN_STATUS(c)		(0x0404 + (c) * 0x40)
> +#define   ADI_AXI_ADC_CHAN_STAT_PN_MASK		GENMASK(2, 1)
> +
> +#define ADI_AXI_ADC_REG_CHAN_CTRL_3(c)		(0x0418 + (c) * 0x40)
> +#define   ADI_AXI_ADC_CHAN_PN_SEL_MASK		GENMASK(19, 16)
> +
> +/* IO Delays */
> +#define ADI_AXI_ADC_REG_DELAY(l)		(0x0800 + (l) * 0x4)
> +#define   AXI_ADC_DELAY_CTRL_MASK		GENMASK(4, 0)
> +
> +#define ADI_AXI_ADC_MAX_IO_NUM_LANES		15
> +
>  #define ADI_AXI_REG_CHAN_CTRL_DEFAULTS		\
>  	(ADI_AXI_REG_CHAN_CTRL_FMT_SIGNEXT |	\
>  	 ADI_AXI_REG_CHAN_CTRL_FMT_EN |		\
>  	 ADI_AXI_REG_CHAN_CTRL_ENABLE)
>  
>  struct adi_axi_adc_state {
> -	struct regmap				*regmap;
> -	struct device				*dev;
> +	struct regmap *regmap;
> +	struct device *dev;
> +	/* lock to protect multiple accesses to the device registers */

Why?  The locking in regmap protects register accesses in general I believe.
I guess this is to prevent accesses during that error detection routine?
Needs more detail in the comment.

> +	struct mutex lock;
>  };
>  
>  static int axi_adc_enable(struct iio_backend *back)
> @@ -104,6 +123,92 @@ static int axi_adc_data_format_set(struct iio_backend *back, unsigned int chan,
>  				  ADI_AXI_REG_CHAN_CTRL_FMT_MASK, val);
>  }
>  
> +static int axi_adc_data_sample_trigger(struct iio_backend *back,
> +				       enum iio_backend_sample_trigger trigger)
> +{
> +	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
> +
> +	if (trigger == IIO_BACKEND_SAMPLE_TRIGGER_EDGE_RISING)

It's an enum, so can't have more than one. Hence switch statement is probably
more extensible and natural to read.
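Something like this user-space sketch of the suggested switch form (stand-in regmap helpers, mask value and enum so it compiles outside the kernel; the behaviour is meant to match the quoted function):

```c
#include <errno.h>

/* Stand-ins for the kernel pieces used by the quoted function. */
#define ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK	0x2u
static unsigned int fake_ctrl_reg;

static void regmap_set_bits_model(unsigned int mask)   { fake_ctrl_reg |= mask; }
static void regmap_clear_bits_model(unsigned int mask) { fake_ctrl_reg &= ~mask; }

enum iio_backend_sample_trigger {
	IIO_BACKEND_SAMPLE_TRIGGER_EDGE_FALLING,
	IIO_BACKEND_SAMPLE_TRIGGER_EDGE_RISING,
	IIO_BACKEND_SAMPLE_TRIGGER_MAX
};

/* switch-based version of axi_adc_data_sample_trigger() */
static int axi_adc_data_sample_trigger_model(enum iio_backend_sample_trigger trigger)
{
	switch (trigger) {
	case IIO_BACKEND_SAMPLE_TRIGGER_EDGE_RISING:
		regmap_clear_bits_model(ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK);
		return 0;
	case IIO_BACKEND_SAMPLE_TRIGGER_EDGE_FALLING:
		regmap_set_bits_model(ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK);
		return 0;
	default:
		return -EINVAL;
	}
}
```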

> +		return regmap_clear_bits(st->regmap, ADI_AXI_ADC_REG_CTRL,
> +					 ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK);
> +	if (trigger == IIO_BACKEND_SAMPLE_TRIGGER_EDGE_FALLING)
> +		return regmap_set_bits(st->regmap, ADI_AXI_ADC_REG_CTRL,
> +				       ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK);
> +
> +	return -EINVAL;
> +}
> +


> +static int axi_adc_chan_status(struct iio_backend *back, unsigned int chan,
> +			       struct iio_backend_chan_status *status)
> +{
> +	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
> +	int ret;
> +	u32 val;
> +
> +	guard(mutex)(&st->lock);
> +	/* reset test bits by setting them */
> +	ret = regmap_write(st->regmap, ADI_AXI_ADC_REG_CHAN_STATUS(chan),
> +			   ADI_AXI_ADC_CHAN_STAT_PN_MASK);
> +	if (ret)
> +		return ret;
> +
> +	fsleep(1000);
Why this particular length sleep?  Is this a case of letting it sit a while
and seeing if an error shows up?  If so, a comment on that would be good.
> +
> +	ret = regmap_read(st->regmap, ADI_AXI_ADC_REG_CHAN_STATUS(chan), &val);
> +	if (ret)
> +		return ret;
> +
> +	if (ADI_AXI_ADC_CHAN_STAT_PN_MASK & val)
> +		status->errors = true;
> +
> +	return 0;
> +}
> +
>  static int axi_adc_chan_enable(struct iio_backend *back, unsigned int chan)
>  {
>  	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
> @@ -166,6 +271,10 @@ static const struct iio_backend_ops adi_axi_adc_generic = {
>  	.chan_disable = axi_adc_chan_disable,
>  	.request_buffer = axi_adc_request_buffer,
>  	.free_buffer = axi_adc_free_buffer,
> +	.data_sample_trigger = axi_adc_data_sample_trigger,
> +	.iodelay_set = axi_adc_iodelays_set,
> +	.test_pattern_set = axi_adc_test_pattern_set,
> +	.chan_status = axi_adc_chan_status,
>  };
>  
>  static int adi_axi_adc_probe(struct platform_device *pdev)
>
Jonathan Cameron April 20, 2024, 3:19 p.m. UTC | #5
On Fri, 19 Apr 2024 17:36:50 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> From: Nuno Sa <nuno.sa@analog.com>
> 
> Since we allow to change the sampling frequency and do it with
> clk_round_rate(), we can cache it and use on the read_raw() interface.
> This will also be useful in a following patch supporting interface
> calibration.
> 
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>

The clk subsystem caches the clock rate in most cases anyway, so
I'm not sure why we need this.  Or is it the point that you are going
to temporarily change it in the next patch?

Patch looks fine, but I think a clearer requirements statement is
needed.

Jonathan


> ---
>  drivers/iio/adc/ad9467.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
> index 7475ec2a56c72..7db87ccc1ea4b 100644
> --- a/drivers/iio/adc/ad9467.c
> +++ b/drivers/iio/adc/ad9467.c
> @@ -117,6 +117,7 @@ struct ad9467_state {
>  	struct iio_backend		*back;
>  	struct spi_device		*spi;
>  	struct clk			*clk;
> +	unsigned long			sample_rate;
>  	unsigned int			output_mode;
>  	unsigned int                    (*scales)[2];
>  
> @@ -331,7 +332,7 @@ static int ad9467_read_raw(struct iio_dev *indio_dev,
>  	case IIO_CHAN_INFO_SCALE:
>  		return ad9467_get_scale(st, val, val2);
>  	case IIO_CHAN_INFO_SAMP_FREQ:
> -		*val = clk_get_rate(st->clk);
> +		*val = st->sample_rate;
>  
>  		return IIO_VAL_INT;
>  	default:
> @@ -346,6 +347,7 @@ static int ad9467_write_raw(struct iio_dev *indio_dev,
>  	struct ad9467_state *st = iio_priv(indio_dev);
>  	const struct ad9467_chip_info *info = st->info;
>  	long r_clk;
> +	int ret;
>  
>  	switch (mask) {
>  	case IIO_CHAN_INFO_SCALE:
> @@ -358,7 +360,12 @@ static int ad9467_write_raw(struct iio_dev *indio_dev,
>  			return -EINVAL;
>  		}
>  
> -		return clk_set_rate(st->clk, r_clk);
> +		ret = clk_set_rate(st->clk, r_clk);
> +		if (ret)
> +			return ret;
> +
> +		st->sample_rate = r_clk;
> +		return 0;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -543,6 +550,8 @@ static int ad9467_probe(struct spi_device *spi)
>  	if (IS_ERR(st->clk))
>  		return PTR_ERR(st->clk);
>  
> +	st->sample_rate = clk_get_rate(st->clk);
> +
>  	st->pwrdown_gpio = devm_gpiod_get_optional(&spi->dev, "powerdown",
>  						   GPIOD_OUT_LOW);
>  	if (IS_ERR(st->pwrdown_gpio))
>
Jonathan Cameron April 20, 2024, 3:34 p.m. UTC | #6
On Fri, 19 Apr 2024 17:36:51 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> From: Nuno Sa <nuno.sa@analog.com>
> 
> To make sure that we have the best timings on the serial data interface
> we should calibrate it. This means going through the device supported
> values and see for which ones we get a successful result. To do that, we
> use a prbs test pattern both in the IIO backend and in the frontend
> devices. Then for each of the test points we see if there are any
> errors. Note that the backend is responsible to look for those errors.
> 
> As calibrating the interface also requires that the data format is disabled
> (the one thing being done in ad9467_setup()), ad9467_setup() was removed
> and configuring the data format is now part of the calibration process.
> 
> Signed-off-by: Nuno Sa <nuno.sa@analog.com>
Trivial comments inline.

Jonathan

> ---
>  drivers/iio/adc/ad9467.c | 337 +++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 296 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
> index 7db87ccc1ea4..44552dd6f4c6 100644
> --- a/drivers/iio/adc/ad9467.c
> +++ b/drivers/iio/adc/ad9467.c
> @@ -4,6 +4,9 @@
>   *
>   * Copyright 2012-2020 Analog Devices Inc.
>   */
> +
> +#include <linux/bitmap.h>
> +#include <linux/bitops.h>
>  #include <linux/cleanup.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> @@ -100,6 +103,8 @@
>  #define AD9467_DEF_OUTPUT_MODE		0x08
>  #define AD9467_REG_VREF_MASK		0x0F
>  
> +#define AD9647_MAX_TEST_POINTS		32
> +
>  struct ad9467_chip_info {
>  	const char		*name;
>  	unsigned int		id;
> @@ -110,6 +115,8 @@ struct ad9467_chip_info {
>  	unsigned long		max_rate;
>  	unsigned int		default_output_mode;
>  	unsigned int		vref_mask;
> +	unsigned int		num_lanes;
> +	bool			has_dco;

What is dco?  Perhaps a comment, or expand the naming somewhat.

>  };

>  
> +static void ad9467_dump_table(const unsigned long *err_map, unsigned int size,
> +			      bool invert)
> +{
> +#ifdef DEBUG
> +	unsigned int bit;
> +
> +	pr_debug("Dump calibration table:\n");

If it's useful, poke it in debugfs, otherwise, drop this code.

> +	for (bit = 0; bit < size; bit++) {
> +		if (bit == size / 2) {
> +			if (!invert)
> +				break;
> +			pr_cont("\n");
> +		}
> +
> +		pr_cont("%c", test_bit(bit, err_map) ? 'x' : 'o');
> +	}
> +#endif
> +}
> +

> +static int ad9467_calibrate_apply(const struct ad9467_state *st,
> +				  unsigned int val)
> +{
> +	unsigned int lane;
> +	int ret;
> +
> +	if (st->info->has_dco) {
> +		int ret;
Shadowing ret above.

> +
> +		ret = ad9467_spi_write(st->spi, AN877_ADC_REG_OUTPUT_DELAY,
> +				       val);
> +		if (ret)
> +			return ret;
> +
> +		return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
> +					AN877_ADC_TRANSFER_SYNC);
> +	}
> +
> +	for (lane = 0; lane < st->info->num_lanes; lane++) {
> +		ret = iio_backend_iodelay_set(st->back, lane, val);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}

> +
> +static int ad9467_calibrate(const struct ad9467_state *st)

Some docs on the sequence or a reference would be good.

> +{
> +	DECLARE_BITMAP(err_map, AD9647_MAX_TEST_POINTS * 2);
> +	unsigned int point, val, inv_val, cnt, inv_cnt = 0;
> +	/*
> +	 * Half of the bitmap is for the inverted signal. The number of test
> +	 * points is the same though...
> +	 */
> +	unsigned int test_points = AD9647_MAX_TEST_POINTS;
> +	unsigned long sample_rate = clk_get_rate(st->clk);
> +	struct device *dev = &st->spi->dev;
> +	bool invert = false, stat;
> +	int ret;
> +
> +	ret = ad9647_calibrate_prepare(st);
> +	if (ret)
> +		return ret;
> +retune:
> +	ret = ad9647_calibrate_polarity_set(st, invert);
> +	if (ret)
> +		return ret;
> +
> +	for (point = 0; point < test_points; point++) {
> +		ret = ad9467_calibrate_apply(st, point);
> +		if (ret)
> +			return ret;
> +
> +		ret = ad9467_calibrate_status_check(st, &stat);
> +		if (ret < 0)
> +			return ret;
> +
> +		__assign_bit(point + invert * test_points, err_map, stat);
> +	}
> +
> +	if (!invert) {
> +		cnt = ad9467_find_optimal_point(err_map, 0, test_points, &val);
> +		/*
> +		 * We're happy if we find, at least, three good test points in
> +		 * a row.
> +		 */
> +		if (cnt < 3) {
> +			invert = true;
> +			goto retune;
> +		}
> +	} else {
> +		inv_cnt = ad9467_find_optimal_point(err_map, test_points,
> +						    test_points, &inv_val);
> +		if (!inv_cnt && !cnt)
> +			return -EIO;
> +	}
> +
> +	ad9467_dump_table(err_map, BITS_PER_TYPE(err_map), invert);
> +
> +	if (inv_cnt < cnt) {
> +		ret = ad9647_calibrate_polarity_set(st, false);
> +		if (ret)
> +			return ret;
> +	} else {
> +		/*
> +		 * polarity inverted is the last test to run. Hence, there's no
> +		 * need to re-do any configuration. We just need to "normalize"
> +		 * the selected value.
> +		 */
> +		val = inv_val - test_points;
> +	}
> +
> +	if (st->info->has_dco)
> +		dev_dbg(dev, "%sDCO 0x%X CLK %lu Hz\n", inv_cnt >= cnt ? "INVERT " : "",
> +			val, sample_rate);
> +	else
> +		dev_dbg(dev, "%sIDELAY 0x%x\n", inv_cnt >= cnt ? "INVERT " : "",
> +			val);
> +
> +	ret = ad9467_calibrate_apply(st, val);
> +	if (ret)
> +		return ret;
> +
> +	/* finally apply the optimal value */
> +	return ad9647_calibrate_stop(st);
> +}
> +
Jonathan Cameron April 20, 2024, 3:39 p.m. UTC | #7
On Fri, 19 Apr 2024 17:36:43 +0200
Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:

> Hi Jonathan,
Hi Nuno,

Friday special :)

> 
> Here goes one more set of new functionality for the backend
> framework. This allows for one of the most important missing features of
> the ad9467 driver. I hope the new interfaces are fairly straightforward,
> though there's one that's likely to catch your attention:
> 
> iio_backend_iodelay_set()
> 
> as you would (rightfully) expect some delay with actual units. The
> reason why it does not have any units is that the IO delay is mostly a
> calibration done at the backend level, and the actual values and timings
> (that each tap corresponds to) are very HW specific. For example, the
> Xilinx/AMD zedboard has different specifications when compared to the
> zc706.
> 
> Given the above, I admit (😅) I went the easier path and just added a
> parameter with no meaningful unit (with proper docs). I'm definitely open
> to ideas if this feels too hacky. One thing I thought of would be an
> additional API that could be called during probe to get an array of
> delays from the backend. Something like:
> 
> iio_backend_iodelays_get(back, const unsigned int **delays_ps, unsigned int *ndelays)
> 
> The backend should know what delays it supports. For the axi-adc IP we
> do have registers to detect the FPGA grade etc., so we could return the
> delays based on the HW we are running on. We would also need an
> additional refclk, as the actual delay each tap introduces depends on it.

From a userspace point of view, do we care about the real values (in units
we understand)? Does the frontend driver algorithm ever care about what they
actually mean either?

Feels like all this is just a sequence of magic values that we try; the only
thing that assigns them any absolute meaning is the assumption that they
are monotonic in some sense, so picking the middle value of a run that
works makes sense.

So I'm not really that bothered about the lack of units.
Whether this interface generalizes well to other devices will be interesting
to see, but if it doesn't and we come up with something better at a later
stage, this is all in-kernel interface, so we can change it anyway at that point.
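That "middle of a run that works" selection can be sketched as a small pure function (illustrative user-space model with made-up names; the driver's own version is the ad9467_find_optimal_point() seen in patch 8):

```c
#include <stdbool.h>

/*
 * Pick the centre of the longest error-free run of taps. Taps are only
 * assumed monotonic, so the middle of the widest "good" window is the
 * safest operating point. Returns the chosen tap; *run_len reports how
 * many consecutive good taps that window had (0 if none worked).
 */
static unsigned int pick_best_tap(const bool *error, unsigned int ntaps,
				  unsigned int *run_len)
{
	unsigned int best_start = 0, best_len = 0;
	unsigned int start = 0, len = 0, i;

	for (i = 0; i < ntaps; i++) {
		if (error[i]) {
			len = 0;
			continue;
		}
		if (!len)
			start = i;
		if (++len > best_len) {
			best_len = len;
			best_start = start;
		}
	}

	*run_len = best_len;
	return best_start + best_len / 2;
}
```

With the error map `x o o o o x o o` this picks tap 3 (middle of the four-tap run), and the caller can reject results where `*run_len` is below a threshold, matching the "at least three good points in a row" criterion from patch 8.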

> 
> The series also has some "unrelated" patches for improvements and fixes. 

Hmm.  Next time pull those out as a separate set and just mention
it as a dependency.

> 
> ---
> Nuno Sa (8):
>       iio: backend: add API for interface tuning
>       iio: adc: adi-axi-adc: only error out in major version mismatch
>       dt-bindings: adc: axi-adc: add clocks property
>       iio: adc: axi-adc: make sure AXI clock is enabled
>       iio: adc: adi-axi-adc: remove regmap max register
>       iio: adc: adi-axi-adc: support digital interface calibration
>       iio: adc: ad9467: cache the sample rate
>       iio: adc: ad9467: support digital interface calibration
> 
>  .../devicetree/bindings/iio/adc/adi,axi-adc.yaml   |   5 +
>  drivers/iio/adc/ad9467.c                           | 340 ++++++++++++++++++---
>  drivers/iio/adc/adi-axi-adc.c                      | 123 +++++++-
>  drivers/iio/industrialio-backend.c                 |  86 ++++++
>  include/linux/iio/backend.h                        |  57 +++-
>  5 files changed, 561 insertions(+), 50 deletions(-)
> ---
> base-commit: 62d3fb9dcc091ccdf25eb3b716e90e07e3ed861f
> change-id: 20240419-ad9467-new-features-fbfbaa5edf06
> --
> 
> Thanks!
> - Nuno Sá
> 
>
Nuno Sá April 22, 2024, 3:40 p.m. UTC | #8
On Sat, 2024-04-20 at 16:00 +0100, Jonathan Cameron wrote:
> On Fri, 19 Apr 2024 17:36:44 +0200
> Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> 
> > From: Nuno Sa <nuno.sa@analog.com>
> > 
> > This is in preparation for supporting interface tuning in one for the
> > devices using the axi-adc backend. The new added interfaces are all
> > needed for that calibration:
> 
> Would be good to have a little more info in this commit message on what
> interface tuning involves?  I hope a tuning fork and a very good sense

will do

> of hearing...
> 
> > 
> >  * iio_backend_test_pattern_set();
> >  * iio_backend_chan_status();
> >  * iio_backend_iodelay_set();
> >  * iio_backend_data_sample_trigger().
> > 
> > Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> Otherwise, trivial stuff inline.  Mostly looks fine. 
> 
> I appreciate you pointed out the taps thing was unit free and hence
> possibly controversial.  Not much we can do about it and reality is
> its a tweak factor - things like calibbias are unit free as well
> for exactly the reason that they tend to be incredibly hardware dependent
> and sometimes even instance of hardware dependent.
> 

Agreed. We could do the iodelays_get() dance but I don't think that would be that
beneficial...

> Jonathan
> 
> > ---
> >  drivers/iio/industrialio-backend.c | 86 ++++++++++++++++++++++++++++++++++++++
> >  include/linux/iio/backend.h        | 57 +++++++++++++++++++++----
> >  2 files changed, 136 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-
> > backend.c
> > index 2fea2bbbe47fd..45eea3b725a35 100644
> > --- a/drivers/iio/industrialio-backend.c
> > +++ b/drivers/iio/industrialio-backend.c
> > @@ -186,6 +186,92 @@ int iio_backend_data_format_set(struct iio_backend *back,
> > unsigned int chan,
> >  }
> >  EXPORT_SYMBOL_NS_GPL(iio_backend_data_format_set, IIO_BACKEND);
> >  
> > +/**
> > + * iio_backend_test_pattern_set - Configure a test pattern
> > + * @back:	Backend device
> > + * @chan:	Channel number
> > + * @pattern:
> > + *
> > + * Configure a test pattern on the backend. This is typically used for
> > + * calibrating the timings on the data digital interface.
> > + *
> > + * RETURNS:
> > + * 0 on success, negative error number on failure.
> > + */
> > +int iio_backend_test_pattern_set(struct iio_backend *back,
> > +				 unsigned int chan,
> > +				 enum iio_backend_test_pattern pattern)
> > +{
> > +	if (pattern >= IIO_BACKEND_TEST_PATTERN_MAX)
> > +		return -EINVAL;
> > +
> > +	return iio_backend_op_call(back, test_pattern_set, chan, pattern);
> > +}
> > +EXPORT_SYMBOL_NS_GPL(iio_backend_test_pattern_set, IIO_BACKEND);
> > +
> > +/**
> > + * iio_backend_chan_status - Get the channel status
> > + * @back:	Backend device
> > + * @chan:	Channel number
> > + * @status:	Channel status
> 
> Feels premature to define a structure for status when it simply returns if
> there is an error so far.  Maybe simplify for now, and revisit once that
> structure needs to be more complex?

Can do that. No strong feelings :). I'll strip out the boolean and drop the struct.

> 
> > + *
> > + * Get the current state of the backend channel. Typically used to check if
> > + * there were any errors sending/receiving data.
> > + *
> > + * RETURNS:
> > + * 0 on success, negative error number on failure.
> > + */
> > +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> > +			    struct iio_backend_chan_status *status)
> > +{
> > +	return iio_backend_op_call(back, chan_status, chan, status);
> > +}
> > +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> > +
> > +/**
> > + * iio_backend_iodelay_set - Set digital I/O delay
> > + * @back:	Backend device
> > + * @lane:	Lane number
> > + * @tap:	Number of taps
> > + *
> > + * Controls delays on sending/receiving data. One usecase for this is to
> > + * calibrate the data digital interface so we get the best results when
> > + * transferring data. Note that @tap has no unit since the actual delay per tap
> > + * is very backend specific. Hence, frontend devices typically should go through
> > + * an array of @taps (the size of that array should typically match the size of
> > + * calibration points on the frontend device) and call this API.
> > + *
> > + * RETURNS:
> > + * 0 on success, negative error number on failure.
> > + */
> > +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> > +			    unsigned int tap)
> 
> taps maybe given it's a number of them?

yeps...

> Is this an industry standard term - sounds like it probably is but my
> google fu is failing.
> 

Not really (I think). It's very AMD/Xilinx specific. If you google for Xilinx IDELAY
control you may find something. I could not find a good name (originally I just had
'delay' but without a proper unit it felt weird), so I admit I used the one that made
the most sense for my specific usecase. Open to suggestions though :).

> > +{
> > +	return iio_backend_op_call(back, iodelay_set, lane, tap);
> > +}
> > +EXPORT_SYMBOL_NS_GPL(iio_backend_iodelay_set, IIO_BACKEND);
> > +
> > +/**
> > + * iio_backend_data_sample_trigger - Control when to sample data
> > + * @back:	Backend device
> > + * @trigger:	Data trigger
> > + *
> > + * Mostly useful for input backends. Configures the backend for when to sample
> > + * data (eg: rising vs falling edge).
> 
> Feels like it might become a flags field at some point, but enum is fine for
> trigger for now I guess.
> 
> > + *
> > + * RETURNS:
> > + * 0 on success, negative error number on failure.
> > + */
> > +int iio_backend_data_sample_trigger(struct iio_backend *back,
> > +				    enum iio_backend_sample_trigger trigger)
> > +{
> > +	if (trigger >= IIO_BACKEND_SAMPLE_TRIGGER_MAX)
> > +		return -EINVAL;
> > +
> > +	return iio_backend_op_call(back, data_sample_trigger, trigger);
> > +}
> > +EXPORT_SYMBOL_NS_GPL(iio_backend_data_sample_trigger, IIO_BACKEND);
> > +
> >  static void iio_backend_free_buffer(void *arg)
> >  {
> >  	struct iio_backend_buffer_pair *pair = arg;
> > diff --git a/include/linux/iio/backend.h b/include/linux/iio/backend.h
> > index a6d79381866ec..ad793fe0d78c2 100644
> > --- a/include/linux/iio/backend.h
> > +++ b/include/linux/iio/backend.h
> > @@ -15,6 +15,19 @@ enum iio_backend_data_type {
> >  	IIO_BACKEND_DATA_TYPE_MAX
> >  };
> >  
> > +/* vendor specific from 32 */
> > +enum iio_backend_test_pattern {
> > +	/* modified prbs9 */
> > +	IIO_BACKEND_ADI_PRBS_9A = 32,
> 
> Not knowing anything much about this, does it make sense to use an enum,
> or should we face facts that we can't have a true generic interface
> and just use a suitably sized int?
> 

I'm also not a fan of the above but we do have generic/standard patterns in this core
(and that could be used by others):

- 0x0: pn9a (device specific, modified pn9)
- 0x1: pn23a (device specific, modified pn23)
- 0x4: pn7 (standard O.150)
- 0x5: pn15 (standard O.150)
- 0x6: pn23 (standard O.150)
- 0x7: pn31 (standard O.150)
- 0x9: pnX (device specific, e.g. ad9361)
- 0x0A: Nibble ramp (Device specific e.g. adrv9001)
- 0x0B: 16 bit ramp 

Luckily enough, the user we have for this is only using a custom/modified pattern. My
issue with the int is: how do frontends know what value they need to pass into
the API? It would really be very backend specific. I know we do expect frontends to
have some assumed knowledge of the backend they're connected to, but I would like to
avoid making those assumptions bigger than they need to be.

My expectation with the enum is that we can have some "contract" between backends and
frontends on the pattern to use. I guess we could give it a try (unless you have some
other idea) and if it starts going out of control, I can admit defeat and change it
to an int.

Or, is the idea to just have the int parameter and some plain defines in the backend
header?
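For illustration, that "contract" could look like the sketch below: standard patterns at low values, vendor-specific ones from a fixed offset, with the core only bounds-checking. This mirrors the shape of the enum in the patch, but the names and exact values here are illustrative, not the final API:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative only -- mirrors the shape of enum iio_backend_test_pattern
 * from the patch: standard (O.150) patterns first, vendor-specific
 * entries from a fixed offset (32 in the patch).
 */
enum test_pattern {
	TEST_PATTERN_PN7,		/* standard O.150 */
	TEST_PATTERN_PN15,		/* standard O.150 */
	TEST_PATTERN_PN23,		/* standard O.150 */
	TEST_PATTERN_PN31,		/* standard O.150 */
	/* vendor specific from 32 */
	TEST_PATTERN_ADI_PRBS_9A = 32,	/* modified prbs9 */
	TEST_PATTERN_MAX
};

/* Core-level bounds check, as iio_backend_test_pattern_set() does */
static bool pattern_valid(enum test_pattern p)
{
	return p < TEST_PATTERN_MAX;
}
```

The point is that both sides name the pattern symbolically, so a frontend never has to know a backend's raw register encoding.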

> How do you unset the test pattern? I expected a IIO_BACKEND_NO_TESTPATERN = 0
> or something like that.
> 

Since this is in the input direction (and for our particular core), we don't have to
unset it. When you choose a test pattern, it just tells the core to match for a
specific signal/pattern. So when you do start getting "real" data, we may still have
those status bits saying there are "errors" but in reality we don't care. We just
care during the tuning/calibration procedure, as we configure matching patterns between
frontend and backend...

OTOH for the axi-dac, for example, we do need to unset the test pattern. And we do
that by (re)configuring the internal CW tone or the external data source (typically
some DMA core).


> > +	IIO_BACKEND_TEST_PATTERN_MAX
> > +};
> > +
> > +enum iio_backend_sample_trigger {
> > +	IIO_BACKEND_SAMPLE_TRIGGER_EDGE_FALLING,
> > +	IIO_BACKEND_SAMPLE_TRIGGER_EDGE_RISING,
> > +	IIO_BACKEND_SAMPLE_TRIGGER_MAX
> > +};
> > +
> >  /**
> >   * struct iio_backend_data_fmt - Backend data format
> >   * @type:		Data type.
> > @@ -28,15 +41,27 @@ struct iio_backend_data_fmt {
> >  	bool enable;
> >  };
> >  
> > +/**
> > + * struct iio_backend_chan_status - Backend channel status
> > + *  @errors:	Errors occurred when sending/receiving data.
> 
> error, it's only a bool so we know there was at least one.

ack

> 
> > + */
> > +struct iio_backend_chan_status {
> > +	bool errors;
> > +};
> > +
> >  /**
> >   * struct iio_backend_ops - operations structure for an iio_backend
> > - * @enable:		Enable backend.
> > - * @disable:		Disable backend.
> > - * @chan_enable:	Enable one channel.
> > - * @chan_disable:	Disable one channel.
> > - * @data_format_set:	Configure the data format for a specific channel.
> > - * @request_buffer:	Request an IIO buffer.
> > - * @free_buffer:	Free an IIO buffer.
> > + * @enable:			Enable backend.
> 
> Hmm. I dislike aligning comments because of this sort of noise.
> I guess I can cope without the ideal precursor patch making the padding
> change, but I am moaning about it...

Yeah, I also dislike it in struct members but for some reason I went like this in
here... I'll send a precursor patch in v2.


- Nuno Sá
Nuno Sá April 22, 2024, 3:46 p.m. UTC | #9
On Sat, 2024-04-20 at 16:19 +0100, Jonathan Cameron wrote:
> On Fri, 19 Apr 2024 17:36:50 +0200
> Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> 
> > From: Nuno Sa <nuno.sa@analog.com>
> > 
> > Since we allow to change the sampling frequency and do it with
> > clk_round_rate(), we can cache it and use on the read_raw() interface.
> > This will also be useful in a following patch supporting interface
> > calibration.
> > 
> > Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> 
> The clk subsystem caches the clock rate in most cases anyway, so
> I'm not sure why we need this.  Or is it the point that you are going
> to temporarily change it in the next patch?
> 

The idea is that in the next patch I want to bail out early if we're not changing the
rate (after clk_round_rate()) because I also don't want to re-run tuning in that
case. Since the rate is not guaranteed to be cached (even though I think most clock
providers don't set the NO_CACHE flag), I went this way. Anyways, this is minor so I
can stick to clk_get_rate() if you prefer.
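The bail-out being described can be modelled as a tiny standalone sketch (hypothetical names; `set_rate()` stands in for the clk_round_rate()/clk_set_rate() path in the driver, and the counter exists only to make the behaviour observable):

```c
#include <assert.h>

/* Minimal userspace model of the write_raw() path discussed above. */
struct model {
	unsigned long cached_rate;	/* st->sample_rate in the patch */
	unsigned int retunes;		/* times calibration re-ran */
};

/*
 * Hypothetical helper: apply an already-rounded rate and only re-run
 * the (expensive) interface tuning when the rate actually changes.
 */
static int set_rate(struct model *m, unsigned long rounded)
{
	if (rounded == m->cached_rate)
		return 0;		/* no change, skip re-tuning */

	m->cached_rate = rounded;
	m->retunes++;			/* calibration would go here */
	return 0;
}
```

Whether the "current rate" comes from a cached field or a fresh clk_get_rate() call is exactly the judgment call in this subthread; the skip logic itself is the same either way.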

> Patch looks fine, but I think a clearer requirements statement is
> needed.
> 
> Jonathan
> 
> 
> > ---
> >  drivers/iio/adc/ad9467.c | 13 +++++++++++--
> >  1 file changed, 11 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
> > index 7475ec2a56c72..7db87ccc1ea4b 100644
> > --- a/drivers/iio/adc/ad9467.c
> > +++ b/drivers/iio/adc/ad9467.c
> > @@ -117,6 +117,7 @@ struct ad9467_state {
> >  	struct iio_backend		*back;
> >  	struct spi_device		*spi;
> >  	struct clk			*clk;
> > +	unsigned long			sample_rate;
> >  	unsigned int			output_mode;
> >  	unsigned int                    (*scales)[2];
> >  
> > @@ -331,7 +332,7 @@ static int ad9467_read_raw(struct iio_dev *indio_dev,
> >  	case IIO_CHAN_INFO_SCALE:
> >  		return ad9467_get_scale(st, val, val2);
> >  	case IIO_CHAN_INFO_SAMP_FREQ:
> > -		*val = clk_get_rate(st->clk);
> > +		*val = st->sample_rate;
> >  
> >  		return IIO_VAL_INT;
> >  	default:
> > @@ -346,6 +347,7 @@ static int ad9467_write_raw(struct iio_dev *indio_dev,
> >  	struct ad9467_state *st = iio_priv(indio_dev);
> >  	const struct ad9467_chip_info *info = st->info;
> >  	long r_clk;
> > +	int ret;
> >  
> >  	switch (mask) {
> >  	case IIO_CHAN_INFO_SCALE:
> > @@ -358,7 +360,12 @@ static int ad9467_write_raw(struct iio_dev *indio_dev,
> >  			return -EINVAL;
> >  		}
> >  
> > -		return clk_set_rate(st->clk, r_clk);
> > +		ret = clk_set_rate(st->clk, r_clk);
> > +		if (ret)
> > +			return ret;
> > +
> > +		st->sample_rate = r_clk;
> > +		return 0;
> >  	default:
> >  		return -EINVAL;
> >  	}
> > @@ -543,6 +550,8 @@ static int ad9467_probe(struct spi_device *spi)
> >  	if (IS_ERR(st->clk))
> >  		return PTR_ERR(st->clk);
> >  
> > +	st->sample_rate = clk_get_rate(st->clk);
> > +
> >  	st->pwrdown_gpio = devm_gpiod_get_optional(&spi->dev, "powerdown",
> >  						   GPIOD_OUT_LOW);
> >  	if (IS_ERR(st->pwrdown_gpio))
> > 
>
Jonathan Cameron April 22, 2024, 5:08 p.m. UTC | #10
On Mon, 22 Apr 2024 17:46:02 +0200
Nuno Sá <noname.nuno@gmail.com> wrote:

> On Sat, 2024-04-20 at 16:19 +0100, Jonathan Cameron wrote:
> > On Fri, 19 Apr 2024 17:36:50 +0200
> > Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> >   
> > > From: Nuno Sa <nuno.sa@analog.com>
> > > 
> > > Since we allow to change the sampling frequency and do it with
> > > clk_round_rate(), we can cache it and use on the read_raw() interface.
> > > This will also be useful in a following patch supporting interface
> > > calibration.
> > > 
> > > Signed-off-by: Nuno Sa <nuno.sa@analog.com>  
> > 
> > The clk subsystem caches the clock rate in most cases anyway, so
> > I'm not sure why we need this.  Or is it the point that you are going
> > to temporarily change it in the next patch?
> >   
> 
> The idea is that in the next patch I want to bail out early if we're not changing the
> rate (after clk_round_rate()) because I also don't want to re-run tuning in that
> case. Since the rate is not guaranteed to be cached (even though I think most clock
> providers don't set the NO_CACHE flag), I went this way. Anyways, this is minor so I
> can stick to clk_get_rate() if you prefer.

I'd make it local by retrieving the current clock rate just before you consider writing
it.  We can optimize any need to cache it later.

Jonathan

> 
> > Patch looks fine, but I think a clearer requirements statement is
> > needed.
> > 
> > Jonathan
> > 
> >   
> > > ---
> > >  drivers/iio/adc/ad9467.c | 13 +++++++++++--
> > >  1 file changed, 11 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
> > > index 7475ec2a56c72..7db87ccc1ea4b 100644
> > > --- a/drivers/iio/adc/ad9467.c
> > > +++ b/drivers/iio/adc/ad9467.c
> > > @@ -117,6 +117,7 @@ struct ad9467_state {
> > >  	struct iio_backend		*back;
> > >  	struct spi_device		*spi;
> > >  	struct clk			*clk;
> > > +	unsigned long			sample_rate;
> > >  	unsigned int			output_mode;
> > >  	unsigned int                    (*scales)[2];
> > >  
> > > @@ -331,7 +332,7 @@ static int ad9467_read_raw(struct iio_dev *indio_dev,
> > >  	case IIO_CHAN_INFO_SCALE:
> > >  		return ad9467_get_scale(st, val, val2);
> > >  	case IIO_CHAN_INFO_SAMP_FREQ:
> > > -		*val = clk_get_rate(st->clk);
> > > +		*val = st->sample_rate;
> > >  
> > >  		return IIO_VAL_INT;
> > >  	default:
> > > @@ -346,6 +347,7 @@ static int ad9467_write_raw(struct iio_dev *indio_dev,
> > >  	struct ad9467_state *st = iio_priv(indio_dev);
> > >  	const struct ad9467_chip_info *info = st->info;
> > >  	long r_clk;
> > > +	int ret;
> > >  
> > >  	switch (mask) {
> > >  	case IIO_CHAN_INFO_SCALE:
> > > @@ -358,7 +360,12 @@ static int ad9467_write_raw(struct iio_dev *indio_dev,
> > >  			return -EINVAL;
> > >  		}
> > >  
> > > -		return clk_set_rate(st->clk, r_clk);
> > > +		ret = clk_set_rate(st->clk, r_clk);
> > > +		if (ret)
> > > +			return ret;
> > > +
> > > +		st->sample_rate = r_clk;
> > > +		return 0;
> > >  	default:
> > >  		return -EINVAL;
> > >  	}
> > > @@ -543,6 +550,8 @@ static int ad9467_probe(struct spi_device *spi)
> > >  	if (IS_ERR(st->clk))
> > >  		return PTR_ERR(st->clk);
> > >  
> > > +	st->sample_rate = clk_get_rate(st->clk);
> > > +
> > >  	st->pwrdown_gpio = devm_gpiod_get_optional(&spi->dev, "powerdown",
> > >  						   GPIOD_OUT_LOW);
> > >  	if (IS_ERR(st->pwrdown_gpio))
> > >   
> >   
>
Jonathan Cameron April 22, 2024, 5:13 p.m. UTC | #11
> >   
> > > + *
> > > + * Get the current state of the backend channel. Typically used to check if
> > > + * there were any errors sending/receiving data.
> > > + *
> > > + * RETURNS:
> > > + * 0 on success, negative error number on failure.
> > > + */
> > > +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> > > +			    struct iio_backend_chan_status *status)
> > > +{
> > > +	return iio_backend_op_call(back, chan_status, chan, status);
> > > +}
> > > +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> > > +
> > > +/**
> > > + * iio_backend_iodelay_set - Set digital I/O delay
> > > + * @back:	Backend device
> > > + * @lane:	Lane number
> > > + * @tap:	Number of taps
> > > + *
> > > + * Controls delays on sending/receiving data. One usecase for this is to
> > > + * calibrate the data digital interface so we get the best results when
> > > + * transferring data. Note that @tap has no unit since the actual delay per tap
> > > + * is very backend specific. Hence, frontend devices typically should go through
> > > + * an array of @taps (the size of that array should typically match the size of
> > > + * calibration points on the frontend device) and call this API.
> > > + *
> > > + * RETURNS:
> > > + * 0 on success, negative error number on failure.
> > > + */
> > > +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> > > +			    unsigned int tap)  
> > 
> > taps maybe given it's a number of them?  
> 
> yeps...
> 
> > Is this an industry standard term - sounds like it probably is but my
> > google fu is failing.
> >   
> 
> Not really (I think). It's very AMD/Xilinx specific. If you google for Xilinx IDELAY
> control you may find something. I could not find a good name (originally I just had
> 'delay' but without a proper unit it felt weird), so I admit I used the one that made
> the most sense for my specific usecase. Open to suggestions though :).

Taps is fine.


> > >  
> > > +/* vendor specific from 32 */
> > > +enum iio_backend_test_pattern {
> > > +	/* modified prbs9 */
> > > +	IIO_BACKEND_ADI_PRBS_9A = 32,  
> > 
> > Not knowing anything much about this, does it make sense to use an enum,
> > or should we face facts that we can't have a true generic interface
> > and just use a suitably sized int?
> >   
> 
> I'm also not a fan of the above but we do have generic/standard patterns in this core
> (and that could be used by others):
> 
> - 0x0: pn9a (device specific, modified pn9)
> - 0x1: pn23a (device specific, modified pn23)
> - 0x4: pn7 (standard O.150)
> - 0x5: pn15 (standard O.150)
> - 0x6: pn23 (standard O.150)
> - 0x7: pn31 (standard O.150)
> - 0x9: pnX (device specific, e.g. ad9361)
> - 0x0A: Nibble ramp (Device specific e.g. adrv9001)
> - 0x0B: 16 bit ramp 
> 
> Lucky enough the user we have for this is only using a custom/modified pattern. my
> issue with the int is that how do frontends know what value do they need to pass into
> the API? It would really be very backend specific. I know we do expect frontends to
> have some assumed knowledge on the backend they're connected too but I would like to
> avoid making those assumptions bigger than they need to be.
> 
> My expectation with the enum is that we can have some "contract" between backends and
> frontends on the pattern to use. I guess we could give it a try (unless you have some
> other idea) and if it starts going out of control, I can assume defeat and change it
> to an int.
> 
> Or, is the idea to just have the int parameter and some plain defines in the backend
> header?

Keep it as an enum for now and let's see where this goes.  Things called 
'modified' are always ominous.  Modified how?  The standard defined ones
are easier to argue for.


> 
> > How do you unset the test pattern? I expected a IIO_BACKEND_NO_TESTPATERN = 0
> > or something like that.
> >   
> 
> Since this is on the input direction (and for our particular core), we don't have to
> unset it. When you choose a test pattern, it just tells the core to match for a
> specific signal/pattern. So when you do start getting "real" data, we may still have
> those status bits saying there are "errors" but in reality we don't care. We just
> care during the tuning/calibration procedure as we configure matching patters between
> frontend and backend...
> 
> OTOH for the axi-dac, for example, we do need to unset the test pattern. And we do
> that by (re)configuring the internal CW tone or the external data source (typically
> some DMA core).

Can we unset it for both input and output?  May make no difference, but easier to reason about
perhaps.

> 
> 
> > > +	IIO_BACKEND_TEST_PATTERN_MAX
> > > +};
Nuno Sá April 23, 2024, 7:27 a.m. UTC | #12
On Sat, 2024-04-20 at 16:13 +0100, Jonathan Cameron wrote:
> On Fri, 19 Apr 2024 17:36:49 +0200
> Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> 
> > From: Nuno Sa <nuno.sa@analog.com>
> > 
> > Implement the new IIO backend APIs for calibrating the data
> > digital interfaces.
> > 
> > While at it, removed the tabs in 'struct adi_axi_adc_state' and used
> > spaces for the members.
> 
> Ideally a precursor patch, but meh, it's 2 lines so I'll just moan about
> it and move on.
> 
> A few minor things inline.
> 
> 
> > 
> > Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> > ---
> >  drivers/iio/adc/adi-axi-adc.c | 113 +++++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 111 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
> > index b312369b7366..d58fa05499c4 100644
> > --- a/drivers/iio/adc/adi-axi-adc.c
> > +++ b/drivers/iio/adc/adi-axi-adc.c
> > @@ -7,11 +7,13 @@
> >   */
> >  
> >  #include <linux/bitfield.h>
> > +#include <linux/cleanup.h>
> >  #include <linux/clk.h>
> >  #include <linux/err.h>
> >  #include <linux/io.h>
> >  #include <linux/delay.h>
> >  #include <linux/module.h>
> > +#include <linux/mutex.h>
> >  #include <linux/of.h>
> >  #include <linux/platform_device.h>
> >  #include <linux/property.h>
> > @@ -37,6 +39,9 @@
> >  #define   ADI_AXI_REG_RSTN_MMCM_RSTN		BIT(1)
> >  #define   ADI_AXI_REG_RSTN_RSTN			BIT(0)
> >  
> > +#define ADI_AXI_ADC_REG_CTRL			0x0044
> > +#define    ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK	BIT(1)
> > +
> >  /* ADC Channel controls */
> >  
> >  #define ADI_AXI_REG_CHAN_CTRL(c)		(0x0400 + (c) * 0x40)
> > @@ -51,14 +56,28 @@
> >  #define   ADI_AXI_REG_CHAN_CTRL_PN_TYPE_OWR	BIT(1)
> >  #define   ADI_AXI_REG_CHAN_CTRL_ENABLE		BIT(0)
> >  
> > +#define ADI_AXI_ADC_REG_CHAN_STATUS(c)		(0x0404 + (c) * 0x40)
> > +#define   ADI_AXI_ADC_CHAN_STAT_PN_MASK		GENMASK(2, 1)
> > +
> > +#define ADI_AXI_ADC_REG_CHAN_CTRL_3(c)		(0x0418 + (c) * 0x40)
> > +#define   ADI_AXI_ADC_CHAN_PN_SEL_MASK		GENMASK(19, 16)
> > +
> > +/* IO Delays */
> > +#define ADI_AXI_ADC_REG_DELAY(l)		(0x0800 + (l) * 0x4)
> > +#define   AXI_ADC_DELAY_CTRL_MASK		GENMASK(4, 0)
> > +
> > +#define ADI_AXI_ADC_MAX_IO_NUM_LANES		15
> > +
> >  #define ADI_AXI_REG_CHAN_CTRL_DEFAULTS		\
> >  	(ADI_AXI_REG_CHAN_CTRL_FMT_SIGNEXT |	\
> >  	 ADI_AXI_REG_CHAN_CTRL_FMT_EN |		\
> >  	 ADI_AXI_REG_CHAN_CTRL_ENABLE)
> >  
> >  struct adi_axi_adc_state {
> > -	struct regmap				*regmap;
> > -	struct device				*dev;
> > +	struct regmap *regmap;
> > +	struct device *dev;
> > +	/* lock to protect multiple accesses to the device registers */
> 
> Why?  The locking in regmap protects register accesses in general I believe.
> I guess this is to prevent accesses during that error detection routine?
> Needs more detail in the comment.

Yes, but if you have, for example, two regmap_*() calls in a row you won't have the
complete access protected as regmap only internally protects each call. And often,
trying to argue which of these multiple accesses are safe or not is very subtle and
prone to issues. That's why I used the "multiple accesses" wording.

As you pointed out, the error detection routine should indeed be atomic. In
iodelay_set() we also have a read after write, and this is one of those cases where I'm
not sure we could actually have an issue with concurrent calls (maybe not). But for
correctness, it definitely makes sense for it to be atomic (as we should check that we
could set the value we just wrote and not something else).

Also, on a second look, the enable/disable() routines should also be protected (for
correctness). If we think of the theoretical (in practice it should not happen :))
case of concurrent enable/disable() calls, we should not be able to disable the core
in the middle of enabling (either before or after).
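The read-after-write point can be modelled in userspace. This is a hedged sketch only: `fake_dev`, `iodelay_set_checked()` and the pthread mutex are stand-ins for the driver's regmap accesses under its 'struct mutex', not real kernel code:

```c
#include <assert.h>
#include <pthread.h>

/*
 * Userspace model of the pattern discussed above: a write followed by
 * a read-back verify must happen under one lock; otherwise a concurrent
 * writer could slip in between and the read-back would check someone
 * else's value.  pthread_mutex_t stands in for the kernel mutex.
 */
struct fake_dev {
	pthread_mutex_t lock;
	unsigned int delay_reg;		/* models ADI_AXI_ADC_REG_DELAY(l) */
};

/* Write a tap value and verify it took hold, as one atomic unit. */
static int iodelay_set_checked(struct fake_dev *dev, unsigned int tap)
{
	int ret = 0;

	pthread_mutex_lock(&dev->lock);
	dev->delay_reg = tap;		/* regmap_write() in the driver */
	if (dev->delay_reg != tap)	/* regmap_read() + compare */
		ret = -1;
	pthread_mutex_unlock(&dev->lock);

	return ret;
}
```

Per-call regmap locking alone cannot provide this: each of the two accesses would be individually serialized, but not the pair.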

> 
> > +	struct mutex lock;
> >  };
> >  
> >  static int axi_adc_enable(struct iio_backend *back)
> > @@ -104,6 +123,92 @@ static int axi_adc_data_format_set(struct iio_backend *back,
> > unsigned int chan,
> >  				  ADI_AXI_REG_CHAN_CTRL_FMT_MASK, val);
> >  }
> >  
> > +static int axi_adc_data_sample_trigger(struct iio_backend *back,
> > +				       enum iio_backend_sample_trigger trigger)
> > +{
> > +	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
> > +
> > +	if (trigger == IIO_BACKEND_SAMPLE_TRIGGER_EDGE_RISING)
> 
> It's an enum, so can't have more than one. Hence switch statement is probably
> more extensible and natural to read.

alright..

> 
> > +		return regmap_clear_bits(st->regmap, ADI_AXI_ADC_REG_CTRL,
> > +					 ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK);
> > +	if (trigger == IIO_BACKEND_SAMPLE_TRIGGER_EDGE_FALLING)
> > +		return regmap_set_bits(st->regmap, ADI_AXI_ADC_REG_CTRL,
> > +				       ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK);
> > +
> > +	return -EINVAL;
> > +}
> > +
> 
> 
> > +static int axi_adc_chan_status(struct iio_backend *back, unsigned int chan,
> > +			       struct iio_backend_chan_status *status)
> > +{
> > +	struct adi_axi_adc_state *st = iio_backend_get_priv(back);
> > +	int ret;
> > +	u32 val;
> > +
> > +	guard(mutex)(&st->lock);
> > +	/* reset test bits by setting them */
> > +	ret = regmap_write(st->regmap, ADI_AXI_ADC_REG_CHAN_STATUS(chan),
> > +			   ADI_AXI_ADC_CHAN_STAT_PN_MASK);
> > +	if (ret)
> > +		return ret;
> > +
> > +	fsleep(1000);
> Why this particular sleep length?  Is this a case of letting it sit a while and seeing
> if an error shows up?  If so a comment on that would be good.

Exactly, will add a comment.

- Nuno Sá
>
Nuno Sá April 23, 2024, 7:32 a.m. UTC | #13
On Sat, 2024-04-20 at 16:34 +0100, Jonathan Cameron wrote:
> On Fri, 19 Apr 2024 17:36:51 +0200
> Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> 
> > From: Nuno Sa <nuno.sa@analog.com>
> > 
> > To make sure that we have the best timings on the serial data interface
> > we should calibrate it. This means going through the device supported
> > values and see for which ones we get a successful result. To do that, we
> > use a prbs test pattern both in the IIO backend and in the frontend
> > devices. Then for each of the test points we see if there are any
> > errors. Note that the backend is responsible to look for those errors.
> > 
> > As calibrating the interface also requires that the data format is disabled
> > (the one thing being done in ad9467_setup()), ad9467_setup() was removed
> > and configuring the data format is now part of the calibration process.
> > 
> > Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> Trivial comments inline.
> 
> Jonathan
> 
> > ---
> >  drivers/iio/adc/ad9467.c | 337 +++++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 296 insertions(+), 41 deletions(-)
> > 
> > diff --git a/drivers/iio/adc/ad9467.c b/drivers/iio/adc/ad9467.c
> > index 7db87ccc1ea4..44552dd6f4c6 100644
> > --- a/drivers/iio/adc/ad9467.c
> > +++ b/drivers/iio/adc/ad9467.c
> > @@ -4,6 +4,9 @@
> >   *
> >   * Copyright 2012-2020 Analog Devices Inc.
> >   */
> > +
> > +#include <linux/bitmap.h>
> > +#include <linux/bitops.h>
> >  #include <linux/cleanup.h>
> >  #include <linux/module.h>
> >  #include <linux/mutex.h>
> > @@ -100,6 +103,8 @@
> >  #define AD9467_DEF_OUTPUT_MODE		0x08
> >  #define AD9467_REG_VREF_MASK		0x0F
> >  
> > +#define AD9647_MAX_TEST_POINTS		32
> > +
> >  struct ad9467_chip_info {
> >  	const char		*name;
> >  	unsigned int		id;
> > @@ -110,6 +115,8 @@ struct ad9467_chip_info {
> >  	unsigned long		max_rate;
> >  	unsigned int		default_output_mode;
> >  	unsigned int		vref_mask;
> > +	unsigned int		num_lanes;
> > +	bool			has_dco;
> 
> What is dco?  Perhaps a comment, or expand the naming somewhat.
> 

data clock output.. will add a comment

> >  };
> 
> >  
> > +static void ad9467_dump_table(const unsigned long *err_map, unsigned int size,
> > +			      bool invert)
> > +{
> > +#ifdef DEBUG
> > +	unsigned int bit;
> > +
> > +	pr_debug("Dump calibration table:\n");
> 
> If it's useful, poke it in debugfs, otherwise, drop this code.

From our experience, it's useful to dump the table in the tuning process. But I guess
I can also just save the bitmap and dump it on demand (on debugfs).

> 
> > +	for (bit = 0; bit < size; bit++) {
> > +		if (bit == size / 2) {
> > +			if (!invert)
> > +				break;
> > +			pr_cont("\n");
> > +		}
> > +
> > +		pr_cont("%c", test_bit(bit, err_map) ? 'x' : 'o');
> > +	}
> > +#endif
> > +}
> > +
> 
> > +static int ad9467_calibrate_apply(const struct ad9467_state *st,
> > +				  unsigned int val)
> > +{
> > +	unsigned int lane;
> > +	int ret;
> > +
> > +	if (st->info->has_dco) {
> > +		int ret;
> Shadowing ret above.
> 

ups...

> > +
> > +		ret = ad9467_spi_write(st->spi, AN877_ADC_REG_OUTPUT_DELAY,
> > +				       val);
> > +		if (ret)
> > +			return ret;
> > +
> > +		return ad9467_spi_write(st->spi, AN877_ADC_REG_TRANSFER,
> > +					AN877_ADC_TRANSFER_SYNC);
> > +	}
> > +
> > +	for (lane = 0; lane < st->info->num_lanes; lane++) {
> > +		ret = iio_backend_iodelay_set(st->back, lane, val);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> 
> > +
> > +static int ad9467_calibrate(const struct ad9467_state *st)
> 
> Some docs on the sequence or a reference would be good.

I can point to the datasheets...

- Nuno Sá
Nuno Sá April 23, 2024, 7:52 a.m. UTC | #14
On Mon, 2024-04-22 at 18:13 +0100, Jonathan Cameron wrote:
> 
> > >   
> > > > + *
> > > > + * Get the current state of the backend channel. Typically used to check if
> > > > + * there were any errors sending/receiving data.
> > > > + *
> > > > + * RETURNS:
> > > > + * 0 on success, negative error number on failure.
> > > > + */
> > > > +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> > > > +			    struct iio_backend_chan_status *status)
> > > > +{
> > > > +	return iio_backend_op_call(back, chan_status, chan, status);
> > > > +}
> > > > +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> > > > +
> > > > +/**
> > > > + * iio_backend_iodelay_set - Set digital I/O delay
> > > > + * @back:	Backend device
> > > > + * @lane:	Lane number
> > > > + * @tap:	Number of taps
> > > > + *
> > > > + * Controls delays on sending/receiving data. One usecase for this is to
> > > > + * calibrate the data digital interface so we get the best results when
> > > > + * transferring data. Note that @tap has no unit since the actual delay per
> > > > tap
> > > > + * is very backend specific. Hence, frontend devices typically should go
> > > > through
> > > > + * an array of @taps (the size of that array should typically match the size
> > > > of
> > > > + * calibration points on the frontend device) and call this API.
> > > > + *
> > > > + * RETURNS:
> > > > + * 0 on success, negative error number on failure.
> > > > + */
> > > > +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> > > > +			    unsigned int tap)  
> > > 
> > > taps maybe given it's a number of them?  
> > 
> > yeps...
> > 
> > > Is this an industry standard term - sounds like it probably is but my
> > > google fu is failing.
> > >   
> > 
> > Not really (I think). It's very AMD/Xilinx specific. If you google for
> > Xilinx IDELAY control you may find something. I could not find a good
> > name (originally I just had 'delay', but without a proper unit it felt
> > weird), so I admit I used the one that made most sense for my specific
> > usecase. Open to suggestions though :).
> 
> Taps is fine.
> 
> 
> > > >  
> > > > +/* vendor specific from 32 */
> > > > +enum iio_backend_test_pattern {
> > > > +	/* modified prbs9 */
> > > > +	IIO_BACKEND_ADI_PRBS_9A = 32,  
> > > 
> > > Not knowing anything much about this, does it make sense to use an enum,
> > > or should we face facts that we can't have a true generic interface
> > > and just use a suitably sized int?
> > >   
> > 
> > I'm also not a fan of the above but we do have generic/standard patterns
> > in this core (and that could be used by others):
> > 
> > - 0x0: pn9a (device specific, modified pn9)
> > - 0x1: pn23a (device specific, modified pn23)
> > - 0x4: pn7 (standard O.150)
> > - 0x5: pn15 (standard O.150)
> > - 0x6: pn23 (standard O.150)
> > - 0x7: pn31 (standard O.150)
> > - 0x9: pnX (device specific, e.g. ad9361)
> > - 0x0A: Nibble ramp (device specific, e.g. adrv9001)
> > - 0x0B: 16 bit ramp
> > 
> > Luckily, the one user we have for this only uses a custom/modified
> > pattern. My issue with the int is: how do frontends know what value they
> > need to pass into the API? It would really be very backend specific. I
> > know we do expect frontends to have some assumed knowledge of the
> > backend they're connected to, but I would like to avoid making those
> > assumptions bigger than they need to be.
> > 
> > My expectation with the enum is that we can have some "contract" between
> > backends and frontends on the pattern to use. I guess we could give it a
> > try (unless you have some other idea) and if it starts going out of
> > control, I can admit defeat and change it to an int.
> > 
> > Or, is the idea to just have the int parameter and some plain defines in
> > the backend header?
> 
> Keep it as an enum for now and let's see where this goes.  Things called 
> 'modified' are always ominous.  Modified how?  The standard defined ones
> are easier to argue for.
> 
> 
> > 
> > > How do you unset the test pattern? I expected a IIO_BACKEND_NO_TESTPATERN = 0
> > > or something like that.
> > >   
> > 
> > Since this is on the input direction (and for our particular core), we
> > don't have to unset it. When you choose a test pattern, it just tells
> > the core to match for a specific signal/pattern. So when you do start
> > getting "real" data, we may still have those status bits saying there
> > are "errors", but in reality we don't care. We only care during the
> > tuning/calibration procedure, as we configure matching patterns between
> > frontend and backend...
> > 
> > OTOH, for the axi-dac, for example, we do need to unset the test
> > pattern. And we do that by (re)configuring the internal CW tone or the
> > external data source (typically some DMA core).
> 
> Can we unset it for both input and output?  May make no difference, but easier to
> reason about
> perhaps.
> 

Yeah, from an API point of view it would make sense for frontends to explicitly set
IIO_BACKEND_NO_TESTPATERN after they are done with it. On the input device (and on
the ADI specific core) that would be a no-op. But for the output device things become
a bit more ambiguous. On the ADI axi-dac, I guess this would mean setting the
internal CW tone (as tuning is not expected to happen during buffering and the
internal CW tone is the default data source).

Yeah, there's a bit of overlap between tuning and [1]. While from an output
device point of view it could make sense to have the tuning test patterns as
part of the internal signals, for an input device that would not make much
sense (I think). Hence, I decided to keep the test pattern separate from the
data source enum. But I guess now is the right time to bring this up so we can
decide otherwise :).

Also, on second thought, on the axi-dac driver, calling
axi_dac_data_source_set(..., IIO_BACKEND_INTERNAL_CONTINUOS_WAVE) for
IIO_BACKEND_NO_TESTPATERN does not look that wrong...

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git/tree/include/linux/iio/backend.h?h=testing#n19
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git/tree/drivers/iio/dac/adi-axi-dac.c?h=testing#n449

- Nuno Sá
Jonathan Cameron April 28, 2024, 3:46 p.m. UTC | #15
On Tue, 23 Apr 2024 09:52:09 +0200
Nuno Sá <noname.nuno@gmail.com> wrote:

> On Mon, 2024-04-22 at 18:13 +0100, Jonathan Cameron wrote:
> >   
> > > >     
> > > > > + *
> > > > > + * Get the current state of the backend channel. Typically used to check if
> > > > > + * there were any errors sending/receiving data.
> > > > > + *
> > > > > + * RETURNS:
> > > > > + * 0 on success, negative error number on failure.
> > > > > + */
> > > > > +int iio_backend_chan_status(struct iio_backend *back, unsigned int chan,
> > > > > +			    struct iio_backend_chan_status *status)
> > > > > +{
> > > > > +	return iio_backend_op_call(back, chan_status, chan, status);
> > > > > +}
> > > > > +EXPORT_SYMBOL_NS_GPL(iio_backend_chan_status, IIO_BACKEND);
> > > > > +
> > > > > +/**
> > > > > + * iio_backend_iodelay_set - Set digital I/O delay
> > > > > + * @back:	Backend device
> > > > > + * @lane:	Lane number
> > > > > + * @tap:	Number of taps
> > > > > + *
> > > > > + * Controls delays on sending/receiving data. One usecase for this is to
> > > > > + * calibrate the data digital interface so we get the best results when
> > > > > + * transferring data. Note that @tap has no unit since the actual delay
> > > > > + * per tap is very backend specific. Hence, frontend devices typically
> > > > > + * should go through an array of @taps (the size of that array should
> > > > > + * typically match the size of calibration points on the frontend
> > > > > + * device) and call this API.
> > > > > + *
> > > > > + * RETURNS:
> > > > > + * 0 on success, negative error number on failure.
> > > > > + */
> > > > > +int iio_backend_iodelay_set(struct iio_backend *back, unsigned int lane,
> > > > > +			    unsigned int tap)    
> > > > 
> > > > taps maybe given it's a number of them?    
> > > 
> > > yeps...
> > >   
> > > > Is this an industry standard term - sounds like it probably is but my
> > > > google fu is failing.
> > > >     
> > > 
> > > Not really (I think). It's very AMD/Xilinx specific. If you google for
> > > Xilinx IDELAY control you may find something. I could not find a good
> > > name (originally I just had 'delay', but without a proper unit it felt
> > > weird), so I admit I used the one that made most sense for my specific
> > > usecase. Open to suggestions though :).  
> > 
> > Taps is fine.
> > 
> >   
> > > > >  
> > > > > +/* vendor specific from 32 */
> > > > > +enum iio_backend_test_pattern {
> > > > > +	/* modified prbs9 */
> > > > > +	IIO_BACKEND_ADI_PRBS_9A = 32,    
> > > > 
> > > > Not knowing anything much about this, does it make sense to use an enum,
> > > > or should we face facts that we can't have a true generic interface
> > > > and just use a suitably sized int?
> > > >     
> > > 
> > > I'm also not a fan of the above but we do have generic/standard patterns
> > > in this core (and that could be used by others):
> > > 
> > > - 0x0: pn9a (device specific, modified pn9)
> > > - 0x1: pn23a (device specific, modified pn23)
> > > - 0x4: pn7 (standard O.150)
> > > - 0x5: pn15 (standard O.150)
> > > - 0x6: pn23 (standard O.150)
> > > - 0x7: pn31 (standard O.150)
> > > - 0x9: pnX (device specific, e.g. ad9361)
> > > - 0x0A: Nibble ramp (device specific, e.g. adrv9001)
> > > - 0x0B: 16 bit ramp
> > > 
> > > Luckily, the one user we have for this only uses a custom/modified
> > > pattern. My issue with the int is: how do frontends know what value they
> > > need to pass into the API? It would really be very backend specific. I
> > > know we do expect frontends to have some assumed knowledge of the
> > > backend they're connected to, but I would like to avoid making those
> > > assumptions bigger than they need to be.
> > > 
> > > My expectation with the enum is that we can have some "contract" between
> > > backends and frontends on the pattern to use. I guess we could give it a
> > > try (unless you have some other idea) and if it starts going out of
> > > control, I can admit defeat and change it to an int.
> > > 
> > > Or, is the idea to just have the int parameter and some plain defines in
> > > the backend header?  
> > 
> > Keep it as an enum for now and let's see where this goes.  Things called 
> > 'modified' are always ominous.  Modified how?  The standard defined ones
> > are easier to argue for.
> > 
> >   
> > >   
> > > > How do you unset the test pattern? I expected a IIO_BACKEND_NO_TESTPATERN = 0
> > > > or something like that.
> > > >     
> > > 
> > > Since this is on the input direction (and for our particular core), we
> > > don't have to unset it. When you choose a test pattern, it just tells
> > > the core to match for a specific signal/pattern. So when you do start
> > > getting "real" data, we may still have those status bits saying there
> > > are "errors", but in reality we don't care. We only care during the
> > > tuning/calibration procedure, as we configure matching patterns between
> > > frontend and backend...
> > > 
> > > OTOH, for the axi-dac, for example, we do need to unset the test
> > > pattern. And we do that by (re)configuring the internal CW tone or the
> > > external data source (typically some DMA core).  
> > 
> > Can we unset it for both input and output?  May make no difference, but easier to
> > reason about
> > perhaps.
> >   
> 
> Yeah, from an API point of view it would make sense for frontends to explicitly set
> IIO_BACKEND_NO_TESTPATERN after they are done with it. On the input device (and on
> the ADI specific core) that would be a no-op. But for the output device things become
> a bit more ambiguous. On the ADI axi-dac, I guess this would mean setting the
> internal CW tone (as tuning is not expected to happen during buffering and the
> internal CW tone is the default data source).
> 
> Yeah, there's a bit of overlapping between tuning and [1]. While from an output
> device point of view, it could make sense to have the tuning test patterns as part of
> the internal signals, for an input device, that would not make much sense (I think).
> Hence, I decided to have the test pattern separated from the data source enum. But I
> guess now is the correct time to bring this up so we can decide otherwise :).
> 
> Also, on a second thought, on the axi-dac driver, calling
> axi_dac_data_source_set(..., IIO_BACKEND_INTERNAL_CONTINUOS_WAVE) on
> IIO_BACKEND_NO_TESTPATERN does not look that wrong...
> 

If that's the default prior to starting tuning, then that seems a reasonable
place to go back to, I think.  Maybe this doesn't matter, and implementations
that don't care can leave the test pattern in place.

Jonathan

> [2]: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git/tree/drivers/iio/dac/adi-axi-dac.c?h=testing#n449
> [1]: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git/tree/include/linux/iio/backend.h?h=testing#n19
> 
> - Nuno Sá
>
Jonathan Cameron April 28, 2024, 3:49 p.m. UTC | #16
On Tue, 23 Apr 2024 09:27:26 +0200
Nuno Sá <noname.nuno@gmail.com> wrote:

> On Sat, 2024-04-20 at 16:13 +0100, Jonathan Cameron wrote:
> > On Fri, 19 Apr 2024 17:36:49 +0200
> > Nuno Sa via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> >   
> > > From: Nuno Sa <nuno.sa@analog.com>
> > > 
> > > Implement the new IIO backend APIs for calibrating the data
> > > digital interfaces.
> > > 
> > > While at it, removed the tabs in 'struct adi_axi_adc_state' and used
> > > spaces for the members.  
> > 
> > Ideally a precursor patch, but meh, it's 2 lines so I'll just moan about
> > it and move on.
> > 
> > A few minor things inline.
> > 
> >   
> > > 
> > > Signed-off-by: Nuno Sa <nuno.sa@analog.com>
> > > ---
> > >  drivers/iio/adc/adi-axi-adc.c | 113 +++++++++++++++++++++++++++++++++++++++++-
> > >  1 file changed, 111 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
> > > index b312369b7366..d58fa05499c4 100644
> > > --- a/drivers/iio/adc/adi-axi-adc.c
> > > +++ b/drivers/iio/adc/adi-axi-adc.c
> > > @@ -7,11 +7,13 @@
> > >   */
> > >  
> > >  #include <linux/bitfield.h>
> > > +#include <linux/cleanup.h>
> > >  #include <linux/clk.h>
> > >  #include <linux/err.h>
> > >  #include <linux/io.h>
> > >  #include <linux/delay.h>
> > >  #include <linux/module.h>
> > > +#include <linux/mutex.h>
> > >  #include <linux/of.h>
> > >  #include <linux/platform_device.h>
> > >  #include <linux/property.h>
> > > @@ -37,6 +39,9 @@
> > >  #define   ADI_AXI_REG_RSTN_MMCM_RSTN		BIT(1)
> > >  #define   ADI_AXI_REG_RSTN_RSTN			BIT(0)
> > >  
> > > +#define ADI_AXI_ADC_REG_CTRL			0x0044
> > > +#define    ADI_AXI_ADC_CTRL_DDR_EDGESEL_MASK	BIT(1)
> > > +
> > >  /* ADC Channel controls */
> > >  
> > >  #define ADI_AXI_REG_CHAN_CTRL(c)		(0x0400 + (c) * 0x40)
> > > @@ -51,14 +56,28 @@
> > >  #define   ADI_AXI_REG_CHAN_CTRL_PN_TYPE_OWR	BIT(1)
> > >  #define   ADI_AXI_REG_CHAN_CTRL_ENABLE		BIT(0)
> > >  
> > > +#define ADI_AXI_ADC_REG_CHAN_STATUS(c)		(0x0404 + (c) * 0x40)
> > > +#define   ADI_AXI_ADC_CHAN_STAT_PN_MASK		GENMASK(2, 1)
> > > +
> > > +#define ADI_AXI_ADC_REG_CHAN_CTRL_3(c)		(0x0418 + (c) * 0x40)
> > > +#define   ADI_AXI_ADC_CHAN_PN_SEL_MASK		GENMASK(19, 16)
> > > +
> > > +/* IO Delays */
> > > +#define ADI_AXI_ADC_REG_DELAY(l)		(0x0800 + (l) * 0x4)
> > > +#define   AXI_ADC_DELAY_CTRL_MASK		GENMASK(4, 0)
> > > +
> > > +#define ADI_AXI_ADC_MAX_IO_NUM_LANES		15
> > > +
> > >  #define ADI_AXI_REG_CHAN_CTRL_DEFAULTS		\
> > >  	(ADI_AXI_REG_CHAN_CTRL_FMT_SIGNEXT |	\
> > >  	 ADI_AXI_REG_CHAN_CTRL_FMT_EN |		\
> > >  	 ADI_AXI_REG_CHAN_CTRL_ENABLE)
> > >  
> > >  struct adi_axi_adc_state {
> > > -	struct regmap				*regmap;
> > > -	struct device				*dev;
> > > +	struct regmap *regmap;
> > > +	struct device *dev;
> > > +	/* lock to protect multiple accesses to the device registers */  
> > 
> > Why?  The locking in regmap protects register accesses in general I believe.
> > I guess this is to prevent accesses during that error detection routine?
> > Needs more detail in the comment.  
> 
> Yes, but if you have, for example, two regmap_*() calls in a row, you won't
> have the complete sequence protected, as regmap only internally protects each
> call. And often, trying to argue which of these multiple accesses are safe or
> not is very subtle and prone to issues. That's why I used the "multiple
> accesses" wording.

Ah.  You meant protecting a sequence of multiple accesses. I was thinking
'against' multiple simultaneous accesses.  I'm fine with your intended meaning.
Perhaps we just need to refer to sequences of accesses or something like that.

> 
> As you pointed out, the error detection routine should indeed be atomic. In
> iodelay_set() we also have a read after write, and this is one of those cases
> where I'm not sure we could actually have an issue with concurrent calls
> (maybe not). But for correctness, it definitely makes sense for it to be
> atomic (as we should check that we set the value we just wrote, and not
> something else).
> 
> Also, on a second look, the enable/disable() routines should also be
> protected (for correctness). If we think of the theoretical (in practice it
> should not happen :)) case of concurrent enable/disable() calls, we should
> not be able to disable the core in the middle of enabling it (either before
> or after).

Protection makes sense, I was just moaning about the meaning of the comment :)

Jonathan