The Mechanics of the DS1870 Lookup Table
Abstract
This application note explains the control path, from input to output, of the DS1870 LDMOS RF Power-Amplifier Bias Controller. Included is a discussion of internal calibration.
Introduction
The DS1870 is used for biasing RF power amplifiers. This application note uses a worked example to show how the device's two-dimensional lookup tables operate.
Input to Output Path
Figure 1 shows the path from a sensed input to the setting of the output pot's wiper position.
Figure 1. Input-to-output signal path of the DS1870.
All inputs are single-ended and referenced to ground. Two signals of particular importance for this example are ID1 and the on-chip temperature sensor. These analog signals are multiplexed and then fed through a programmable gain stage on the analog side of the A/D. Once the signal is digitized, an offset correction is applied on the digital side of the A/D. Both the gain and offset corrections (scaling) are programmable, with each signal having its own gain and offset scale, through the calibration procedure explained below.

The digital values obtained this way are used as pointers to index two lookup tables: one table is indexed by the temperature value, the other by the ID1 value. As a pointer moves from one location to another, the content of the register being pointed at is bused into an adder. The adder therefore sums the contents of two registers at any given time, responding both to changes in temperature and to changes in ID1. The resulting value in the adder sets the position of the pot's wiper.
Note that ID1 could be any voltage signal, whether it represents current, external temperature, or any other variable.
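The path just described can be sketched as a small software model. This is not the device's firmware or register map; the table sizes, gains, offsets, and the code-to-index mapping below are hypothetical, chosen only to show how the two entries selected by the two digitized inputs are summed into a wiper position:

```python
# Illustrative model of the DS1870 input-to-output path: two lookup
# tables, one indexed by the digitized temperature and one by the
# digitized ID1 value, whose selected entries are summed by an adder
# to produce the output pot's wiper position. All numbers hypothetical.

def digitize(analog, gain, offset, full_scale=2.5, bits=10):
    """Apply the programmable gain (analog side of the A/D) and the
    offset correction (digital side), clamped to the code range."""
    code = int((analog * gain) / full_scale * (2**bits - 1)) + offset
    return max(0, min(2**bits - 1, code))

def wiper_position(temp_in, id1_in, temp_lut, id1_lut, cal):
    # Each signal has its own gain/offset scale, set during calibration.
    t_code = digitize(temp_in, *cal["temp"])
    i_code = digitize(id1_in, *cal["id1"])
    # The digitized values act as pointers into the two lookup tables...
    t_entry = temp_lut[t_code * len(temp_lut) // 1024]
    i_entry = id1_lut[i_code * len(id1_lut) // 1024]
    # ...and the adder sums the two selected register contents.
    return t_entry + i_entry

cal = {"temp": (1.0, 0), "id1": (10.0, 0)}   # hypothetical scales
temp_lut = [40, 42, 44, 46]                  # hypothetical contents
id1_lut  = [0, 5, 10, 15]

print(wiper_position(1.25, 0.10, temp_lut, id1_lut, cal))  # -> 47
```

As temperature or ID1 drifts, a new table entry is selected and the adder output, and hence the wiper, moves accordingly.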
Scaling and Calibration
Every signal is scaled by the gain and offset programmed in during calibration. The signals ID1, ID2, VD... can be scaled individually, whether their full-scale range is as low as 250mV or as high as 2.5V. This makes better use of the A/D's dynamic range.
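A minimal sketch of why per-signal gain matters, assuming (hypothetically) a 10-bit converter with a 2.5V full scale: with unity gain, a 250mV full-scale signal would use only the bottom tenth of the code range, while a gain of 10 spreads it across all codes.

```python
# Hypothetical A/D code mapping; the 10-bit width and 2.5 V reference
# are assumptions for illustration, not the DS1870's actual format.
def adc_code(v, gain, full_scale=2.5, bits=10):
    return min(2**bits - 1, int(v * gain / full_scale * (2**bits - 1)))

print(adc_code(0.25, gain=1))    # -> 102 of 1023 codes used
print(adc_code(0.25, gain=10))   # -> 1023, the full range
```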
Calibration of the input variable is explained on p. 12 of the data sheet under "Voltage Monitor Calibration". However, here is a perspective that should shed additional light on it. Essentially it is a two-point calibration that is performed repetitively (low, high, low, high...) so that, with each point pair (low analog input/low digital output, high analog input/high digital output), one bit of the scale (gain) register (registers listed in Table 1, p. 17) is fixed through successive approximation. The process goes like this:
1. Set the offset register to 0h.
2. Loop starts here: set the analog input to 0 and read the digitized value Meas1.
3. Set the analog input to 0.225 (given an example full scale of 0.25) and read the digitized value Meas2.
4. If (Meas2 - Meas1) is greater than the expected delta (CNT2 - CNT1), the current bit in the scale register is set to 0; otherwise it is set to 1.
5. Repeat the loop for the next most significant bit, until all bits in the scale register are set.
6. Finally, after the last scale bit is set, set the offset register to the value that brings Meas1 to 0, given that the input is 0.
Note that when the analog input is said to be 0 above, this does not necessarily refer to the input voltage, but rather to the minimum "real" quantity being measured (current, temperature). The expected delta refers to the expected difference in digital counts (CNT2 - CNT1) between the two calibration points.