FALC (Fourier Analysis of Light Curves)

This program performs a Fourier analysis of lightcurve data, in units of
astronomical magnitudes.  For a specified period it does a linear least
squares fit up to any harmonic order, so the program is used by
grid-searching these linear solutions in period or in harmonic order.

The input format of a data file can be seen in the examples included.  The
program uses only the time tag in Julian days (-2.4e6), the magnitude, and
the error bar of the magnitude.  If no error bar is given, the program
inserts 0.01 for the "sigma".  On input, only records starting with the
letter "V" are treated as data.  In particular, records starting with "X"
are skipped.  I use "X" to remove data points from the fit without deleting
them from the archival data file.  Among header records (A or C), only the
last-read A record before data is used, and on it only the asteroid name
and date (used for labels) and the solar phase angle (col. 36-41) are read.
The latter is used only to generate the "FAZ" output file, so it is not
necessary for the Fourier fit itself.  Following is a sample of an input
file:

A45 .812 .69049 10 51 8. +12 15 48. 0. 31. 82 4 24
C45 4.292 .759 .69042 10 51 2. +12 14 50. 0. 0. 82 4 24
A45 APR2482.EXT 2.5806 1.8627 18.49 .018 11.929 .30 .020 .020
C45 V
V 41506. 45083.67715 1.079 8.427 .013 3.1 .000
V 50956. 45083.71524 1.103 8.356 .011 1.3 -.002
V 61239. 45083.75879 1.207 8.403 .009 1.0 -.002
V 63315. 45083.77309 1.264 8.440 .013 1.1 -.003
V 75108. 45083.82718 1.656 8.340 .011 1.1 -.010 8.385 .030 -76.8 .389
V 82453. 45083.85062 1.985 8.303 .012 1.2 -.016 8.238 .016 -32.6 .319
A45 .830 .68776 10 51 8. +12 16 19. 2. 20. 82 4 25
C45 4.313 1.000 .68769 10 51 2. +12 14 50. 0. 0. 82 4 25
A45 APR2582.EXT 2.5798 1.8724 18.73 .018 11.929 .30 .010 .020
C45 V
V 44826. 45084.70030 1.089 8.377 .011 1.2 -.001
V 52312. 45084.72444 1.123 8.456 .007 1.4 -.002
V 60660. 45084.75486 1.202 8.474 .009 1.4 -.004
V 65107. 45084.78550 1.341 8.331 .008 1.4 -.006
XA45 .830 .67971 10 51 19. +12 16 46. 6. -13. 82 4 28
XC45 4.314 1.000 .67950 10 51 2. +12 14 50. 0. 0. 82 4 28
XA45 APR2882.EXT 2.5776 1.9021 19.41 .018 11.929 .30 .010 .020
C45 V
XV 52446. 45087.72553 1.142 8.774 .015 1.3 -.003 8.802 .025 68.5 -.177
XV 63202. 45087.77224 1.313 8.786 .010 1.3 -.006 8.758 .027 -66.0 .168
XV 65202. 45087.78614 1.396 8.789 .013 1.2 -.007
XV 72941. 45087.81228 1.614 8.774 .010 1.2 -.012     WRONG OBJECT
XV 74301. 45087.82154 1.719 8.763 .012 1.0 -.012
XV 82942. 45087.85396 2.280 8.754 .016 1.1 -.026
^        ^                 ^    (^)
rec. type JD-2.4e6   phase  mag  sig

Only the columns marked are essential, and the one marked (^) is the sigma
value, optional but used if given.
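If you want to pre-process or sanity-check a data file outside of FALC, the
record handling described above is easy to mimic.  The following Python
sketch is not part of FALC; it assumes whitespace-separated fields in the
column order marked above (the real files are fixed-column), and it
supplies 0.01 when no sigma is present:

    # Rough sketch (not FALC itself): keep V records, skip X'd records, and
    # start a new lightcurve (magnitude level) at each A/C header record.
    # Field positions are an assumption based on the columns marked above.
    def read_lightcurves(path, default_sigma=0.01):
        lightcurves = []      # one list of (jd, mag, sigma) per magnitude level
        current = []
        for line in open(path):
            if not line.strip() or line.startswith("X"):
                continue                      # X'd records are ignored entirely
            if line[0] in "AC":               # header record: new magnitude level
                if current:
                    lightcurves.append(current)
                    current = []
            elif line[0] == "V":              # data record
                fields = line.split()
                jd = float(fields[2])         # JD - 2.4e6
                mag = float(fields[4])        # magnitude
                sigma = float(fields[5]) if len(fields) > 5 else default_sigma
                current.append((jd, mag, sigma))
        if current:
            lightcurves.append(current)
        return lightcurves

Each returned list corresponds to one magnitude level (one of the NLC
lightcurves discussed below); because X'd header records are skipped along
with X'd data, the merging trick described in the next paragraph falls out
automatically.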
Each time the program encounters a header line (A or C), a new magnitude
level is assumed; that is, another free solution parameter (the zero-point
magnitude for another day) is introduced.  If you want to constrain the
solution according to an estimated magnitude level, i.e. if the same
comparison star was used and the change in phase angle from one day to the
next is minimal, you can adjust the magnitudes of the second day according
to a phase angle model, put X's in column one of the second set of header
records, and the program will then treat the second day's data as part of a
single day's data, tied to only one solution magnitude.  Here's how it
would look for the first two days above, assuming a phase angle law of
0.030 mag./deg.:

A45 .812 .69049 10 51 8. +12 15 48. 0. 31. 82 4 24
C45 4.292 .759 .69042 10 51 2. +12 14 50. 0. 0. 82 4 24
A45 APR2482.EXT 2.5806 1.8627 18.49 .018 11.929 .30 .020 .020
C45 V
V 41506. 45083.67715 1.079 8.427 .013 3.1 .000
V 50956. 45083.71524 1.103 8.356 .011 1.3 -.002
V 61239. 45083.75879 1.207 8.403 .009 1.0 -.002
V 63315. 45083.77309 1.264 8.440 .013 1.1 -.003
V 75108. 45083.82718 1.656 8.340 .011 1.1 -.010 8.385 .030 -76.8 .389
V 82453. 45083.85062 1.985 8.303 .012 1.2 -.016 8.238 .016 -32.6 .319
X45 .830 .68776 10 51 8. +12 16 19. 2. 20. 82 4 25
X45 4.313 1.000 .68769 10 51 2. +12 14 50. 0. 0. 82 4 25
X45 APR2582.EXT 2.5798 1.8724 18.73 .018 11.929 .30 .010 .020
C45 V
V 44826. 45084.70030 1.089 8.370 .011 1.2 -.001      Mags adjusted -0.007
V 52312. 45084.72444 1.123 8.449 .007 1.4 -.002      according to phase law
V 60660. 45084.75486 1.202 8.467 .009 1.4 -.004      0.030 mag/deg
V 65107. 45084.78550 1.341 8.324 .008 1.4 -.006

The above block would be read as a single lightcurve referenced to a single
magnitude level, assumed to be at phase angle 18.49 deg, the phase angle
given for the first day.

The program runs from a DOS command prompt (sorry, no windows).  After
calling for an input file, the program displays the file as it reads it,
then provides a summary and calls for input with lines that look like this:

---------------------------------------
NLC = 4   NTOT = 26
Enter fit order, period, scan increment, scan length, and plot scale
below current values (No entry to leave unchanged, negative fit order
for period scan, letter entry to display solution):
 4 10.00000 .01000 10 10.00
----------------------------------------

This was generated using the file 45tmo.82 as input.  NLC is the number of
individual lightcurves (independent values of absolute magnitude) to be
fitted.  If the file had been revised as in the example above, NLC would be
reduced to 3 and the first two days' data would be treated as one
lightcurve.  NTOT is the number of data points.  The last line is a set of
default values to be used for doing a solution scan.  4 is the harmonic
order (NFIT) of the solution to try.  I normally start with 4, but anything
up to around 15 or so can be used.  N.B., the total number of solution
parameters is 2*NFIT + NLC.  If you changed NFIT to 10 in the example
above, the total number of solution variables would be 2*10 + 4 = 24,
dangerously close to the total number of data points, 26.  Normally you
should keep to not more than around half as many solution parameters as you
have data points, and fewer is better.  NFIT = 4 results in 12 solution
parameters, probably as many as are safe in the above example.

The next two numbers are the start period to try and the period increment;
in the example P = 10.00000 hours and DP = 0.01000 hours.  The next number
(10) is the number of solutions to try, in this case ten.  45 Eugenia has a
known period of 5.699 hours.  If you wanted to go hunting for a solution,
you could enter, below the numbers displayed, the following:

 4 10.00000 .01000 10 10.00
   5.65

This would generate a set of 10 fourth-order solutions with periods from
5.65 to 5.74 hours.  Note that any number not re-entered remains at the
default or previous value, as given on the first line.  The last number,
10.00 in the example above, is the full-scale range of the line plot which
will appear on the screen, in units of "sigmas", the formal a priori
estimated accuracy of the data.  If all sigmas were 0.01, for example, ten
sigmas would be 0.1, so the plot scale would go up to an RMS dispersion of
the data from the fit curve of 0.1 mag.
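To make the parameter counting concrete, here is a rough Python sketch (not
FALC's own code) of the linear problem solved at each trial period.  It
assumes the conventional model m(t) = Z_j + sum_k [a_k sin(k*phi) +
b_k cos(k*phi)], with phi = 2*pi*(t - T0)/P, which has exactly NLC + 2*NFIT
free parameters; FALC's internal conventions (epoch, phase origin,
normalization of the dispersion) may differ:

    import numpy as np

    # Sketch of the linear least-squares solution at a fixed trial period.
    # Unknowns: NLC zero-point magnitudes, then a sine and cosine coefficient
    # for each of NFIT harmonics -> NLC + 2*NFIT parameters in all.
    def fit_fixed_period(lightcurves, period_hr, nfit, t0):
        """lightcurves: list of lists of (jd, mag, sigma); period_hr in hours,
        jd and t0 in days (JD - 2.4e6).  Returns (parameters, dispersion)."""
        nlc = len(lightcurves)
        rows, mags, sigs = [], [], []
        for j, lc in enumerate(lightcurves):
            for jd, mag, sig in lc:
                row = np.zeros(nlc + 2 * nfit)
                row[j] = 1.0                       # zero point of lightcurve j
                phi = 2.0 * np.pi * (jd - t0) / (period_hr / 24.0)
                for k in range(1, nfit + 1):
                    row[nlc + 2 * k - 2] = np.sin(k * phi)
                    row[nlc + 2 * k - 1] = np.cos(k * phi)
                rows.append(row)
                mags.append(mag)
                sigs.append(sig)
        A = np.array(rows) / np.array(sigs)[:, None]   # weight rows by 1/sigma
        b = np.array(mags) / np.array(sigs)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        resid = A @ params - b                         # residuals in sigma units
        dsp = np.sqrt(np.mean(resid ** 2))             # cf. the DSP column below
        return params, dsp

Scanning in period or in harmonic order then amounts to repeating this
linear solve over a grid and comparing dispersions, which is what the scans
described below do.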
Here's the screen output from the above run:

--------------------------------------------------
 4 10.00000 .01000 10 10.00
   5.65
4 5.65000 5.83351 + .0112501 .0116519  1 of 10
4 5.66000 5.32760 + .0102744 .0106414  2 of 10
4 5.67000 4.22078 + .0081399 .0084306  3 of 10
4 5.68000 2.86324 + .0055218 .0057190  4 of 10
4 5.69000 1.57488 + .0030372 .0031457  5 of 10
4 5.70000 1.19099 + .0022969 .0023789  6 of 10
4 5.71000 2.09459 + .0040395 .0041837  7 of 10
4 5.72000 3.12527 + .0060272 .0062424  8 of 10
4 5.73000 3.85284 + .0074303 .0076957  9 of 10
4 5.74000 4.01788 + .0077486 .0080253 10 of 10
NLC = 4   NTOT = 26
Enter fit order, period, scan increment, scan length, and plot scale
below current values (No entry to leave unchanged, negative fit order
for period scan, letter entry to display solution):
 4 5.75000 .01000 10 10.00
----------------------------------------------------

Sure enough, it looks like 5.70 hours is a pretty good "guess".  Each line
of output includes the fit harmonic order, the value of the period, the RMS
dispersion in units of the a priori estimated uncertainty (thus, 1.0 means
the fit is exactly as good as you estimated the noise level to be), the
plot of dispersion, two columns of fit uncertainty in units of magnitude,
and finally the fit number, "N of M requested".

The first of the two columns of fit uncertainty is the formal uncertainty
of the fitted curve, that is, the RMS fit dispersion divided by
sqrt[(N)(N-K)], where N is the number of data points and K is the number of
solution constants.  The second column is the same, except divided by
sqrt[(N)(N-K-1)].  The difference between the values in the two columns is
a formal measure of the significance of changing the solution.  Thus, if I
force the solution off from the minimum value of dispersion by an amount
that raises the dispersion in the first column to be equal to the value in
the second column at the minimum, then I am "one sigma" off of the least
squares solution.  Let's look at a more detailed scan of period to see how
this works:

----------------------------------------------------------
 4 5.75000 .01000 10 10.00
   5.695 .001
4 5.69500 1.17296 + .0022621 .0023429  1 of 10
4 5.69600 1.13814 + .0021949 .0022733  2 of 10
4 5.69700 1.12274 + .0021652 .0022426  3 of 10
4 5.69800 1.12694 + .0021733 .0022509  4 of 10
4 5.69900 1.15019 + .0022182 .0022974  5 of 10
4 5.70000 1.19103 + .0022969 .0023790  6 of 10
4 5.70100 1.24719 + .0024052 .0024911  7 of 10
4 5.70200 1.31614 + .0025382 .0026289  8 of 10
4 5.70300 1.39559 + .0026914 .0027875  9 of 10
4 5.70400 1.48317 + .0028603 .0029625 10 of 10
-----------------------------------------------------------

The minimum dispersion is about .002165, for a period of 5.697 hours.  The
number in the second column there is .0022426, which is about equal to the
value in the first column for periods of 5.695 hours and 5.699 hours.
These values are +/-0.002 hours from the best-fit solution, so we infer
that the formal uncertainty of the period determined is +/-0.002 hours.

The other example file included on the program disk, 45all.82, has one
additional day of data, from more than a month earlier.  You can run the
analysis of that file to obtain a much more refined value of the period.
You should get 5.6988 +/-0.0003 hours, using fit order NFIT = 5.  Note that
the refined value differs from the short-arc solution by almost one sigma
of the short-arc solution, verifying that our error estimate was about
right.
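Before moving on, here is the bracketing logic just described, written out
as a small Python sketch (again, not part of FALC), using the central rows
of the detailed scan above:

    # Each row is (period, column-1 uncertainty, column-2 uncertainty) as printed.
    scan = [
        (5.695, .0022621, .0023429),
        (5.696, .0021949, .0022733),
        (5.697, .0021652, .0022426),
        (5.698, .0021733, .0022509),
        (5.699, .0022182, .0022974),
        (5.700, .0022969, .0023790),
    ]
    best = min(scan, key=lambda row: row[1])      # row with the minimum column-1 value
    threshold = best[2]                           # column-2 value at the minimum
    inside = [p for p, u1, u2 in scan if u1 <= threshold]
    print(best[0], min(inside), max(inside))
    # -> 5.697  5.696  5.699 : a strict comparison gives a bracket of about
    #    +/-0.0015 h on this grid; the text reads the crossing as 5.695 and
    #    5.699, hence the quoted +/-0.002 hours.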
The next scan that should be performed is in harmonic order.  Select the
best-fit period, and specify a NEGATIVE value for the scan increment.  This
forces the program to increment the harmonic order and keep the period
constant.  I usually scan from NFIT = 2 up to about as high as the number
of data points allows (but not over around 12).  As noted, with only 26
points, NFIT = 8 is probably already too much, but it is an OK range to
examine:

-----------------------------------------------------------
 4 5.70500 .00100 10 10.00
   2 5.698 -1 7
2 5.69800 1.43848 + .0027741 .0028512  1 of 7
3 5.69800 1.27744 + .0024636 .0025406  2 of 7
4 5.69800 1.12694 + .0021733 .0022509  3 of 7
5 5.69800  .88977 + .0017159 .0017874  4 of 7
6 5.69800  .88354 + .0017039 .0017891  5 of 7
7 5.69800  .89352 + .0017232 .0018309  6 of 7
8 5.69800  .92002 + .0017743 .0019221  7 of 7
-------------------------------------------------------------

Formally, NFIT = 5 is "much better" than 4, but one should be cautious here
because the dispersion falls under 1.0; that is, the fit is better than the
formal noise level estimated for the data.  Notice that increasing the
order of fit fails to improve things beyond about 5, and in fact makes
things worse for NFIT = 8.  This is because of the factor sqrt[(N)(N-K)] in
the denominator, as K increases with added solution parameters.

To decide whether an additional harmonic is formally significant, we can
use the two columns of fit uncertainties again, but this time keep in mind
that each harmonic order introduces TWO new solution parameters, so the fit
uncertainty has to improve by twice the difference in the two columns to be
significant.  Between order 4 and 5, the fit uncertainty did indeed improve
by more than twice the difference in the two columns, so formally the fifth
harmonic is significant, but with the above word of caution.  Order 6 is
"better", but not enough to be formally significant: the first-column fit
uncertainty of the order 6 solution is 0.0017039, and the value in the
second column is larger by 0.0000852, so the order 6 solution would have to
be better than the order 5 solution by 2 x .0000852 = .0001704 in order to
be significant.  So in this case, the best solution order is 5.

One last refinement: we can do another period scan for NFIT = 5, to really
home in on the period value:

--------------------------------------------------------------
 9 5.69800 .00000 7 10.00
   5 5.695 .0005 12
5 5.69500 1.00145 + .0019313 .0020118  1 of 12
5 5.69550  .96733 + .0018655 .0019432  2 of 12
5 5.69600  .93880 + .0018105 .0018859  3 of 12
5 5.69650  .91620 + .0017669 .0018405  4 of 12
5 5.69700  .90038 + .0017364 .0018087  5 of 12
5 5.69750  .89145 + .0017192 .0017908  6 of 12
5 5.69800  .88977 + .0017159 .0017874  7 of 12
5 5.69850  .89538 + .0017268 .0017987  8 of 12
5 5.69900  .90828 + .0017516 .0018246  9 of 12
5 5.69950  .92798 + .0017896 .0018642 10 of 12
5 5.70000  .95420 + .0018402 .0019169 11 of 12
5 5.70050  .98625 + .0019020 .0019812 12 of 12
---------------------------------------------------------------

The solution period has not changed much; interpolating a bit, it is maybe
5.6978, and the error bar may be about +/-0.0017.
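For reference, the "twice the column difference" test above is easy to
apply mechanically.  A small Python sketch, with the numbers taken from the
order scan above:

    # Sketch of the significance rule: an added harmonic order brings TWO new
    # parameters, so the column-1 uncertainty must improve by more than twice
    # the (column 2 - column 1) gap of the higher-order solution.
    def added_order_significant(prev_unc1, new_unc1, new_unc2):
        return (prev_unc1 - new_unc1) > 2.0 * (new_unc2 - new_unc1)

    print(added_order_significant(.0021733, .0017159, .0017874))  # order 4 -> 5: True
    print(added_order_significant(.0017159, .0017039, .0017891))  # order 5 -> 6: False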
Having now arrived at a "best" solution, set up a single solution with
those parameters and generate an output file as follows:

--------------------------------------------------------------
 5 5.70100 .00050 12 10.00
   5.698 1                                     <-- input
5 5.69800  .88977 + .0017159 .0017874  1 of
NLC = 4   NTOT = 26
Enter fit order, period, scan increment, scan length, and plot scale
below current values (No entry to leave unchanged, negative fit order
for period scan, letter entry to display solution):
 5 5.69850 .00050 1 10.00
   p                                           <--- display solution
NLC = 4   NTOT = 26
NFIT = 5  T0 = 45087.5  P = 5.69800  DSP = .88977  PE = .0017159
 8.3902 8.3975 8.4349 8.4447 .0250 .0014 -.0721 .0004 .0103
  .0013 .0021 -.0026 .0006 .0127
Enter CR to continue, N for new input file, X to stop,
or file name to write output files:
 45tmo                                         <-- generate output file
NLC = 4   NTOT = 26
NFIT = 5  T0 = 45087.5  P = 5.698000  DSP = .88977  PE = .0017159
F 8.3902 8.3975 8.4349 8.4447 .0250 .0014 -.0721 .0004 .0103
F  .0013 .0021 -.0026 .0006 .0127
 2.4677  .0733 .014 45089.73957 8.518 8.522 -.004  -.3002 .433
 3.0007 -.0387 .019 45089.76178 8.406 8.400  .006   .3150 .527
 4.0702 -.0617 .012 45089.80634 8.383 8.382  .001   .0583 .714
Phase = 19.83  Mean Mag = 8.445  DSP =  .45696  PE = .0037541  APR3082.EXT
 5.0074  .0381 .009 45088.65831 8.473 8.475 -.002  -.1987 .879
 5.4682  .0261 .008 45088.67751 8.461 8.459  .002   .2140 .960
 5.6581  .0151 .009 45088.68542 8.450 8.451 -.001  -.1417 .993
  .1583 -.0119 .009 45088.69368 8.423 8.431 -.008  -.8479 .028
  .2968 -.0149 .008 45088.69945 8.420 8.411  .009  1.1121 .052
  .5994 -.0519 .011 45088.71206 8.383 8.381  .002   .1684 .105
  .9282 -.0499 .013 45088.72576 8.385 8.396 -.011  -.8835 .163
 1.2513  .0081 .009 45088.73922 8.443 8.431  .012  1.3857 .220
 1.5801  .0201 .008 45088.75292 8.455 8.465 -.010 -1.2389 .277
 1.8964  .0881 .011 45088.76610 8.523 8.515  .008   .7120 .333
 2.2127  .1081 .009 45088.77928 8.543 8.544 -.001  -.1232 .388
 2.5348  .0461 .018 45088.79270 8.481 8.497 -.016  -.8734 .445
 2.9051 -.0279 .008 45088.80813 8.407 8.406  .001   .1360 .510
Phase = 19.62  Mean Mag = 8.435  DSP = 1.17362  PE = .0030327  APR2982.EXT
 1.1832 -.0205 .011 45084.70030 8.377 8.386 -.009  -.8564 .208
 1.7626  .0585 .007 45084.72444 8.456 8.455  .001   .1255 .309
 2.4926  .0765 .009 45084.75486 8.474 8.469  .005   .5061 .437
 3.2280 -.0665 .008 45084.78550 8.331 8.331  .000   .0292 .567
Phase = 18.73  Mean Mag = 8.398  DSP =  .85244  PE = .0035817  APR2582.EXT
 5.1176  .0368 .013 45083.67715 8.427 8.424  .003   .2452 .898
  .3338 -.0342 .011 45083.71524 8.356 8.361 -.005  -.4766 .059
 1.3790  .0128 .009 45083.75879 8.403 8.398  .005   .5514 .242
 1.7222  .0498 .013 45083.77309 8.440 8.441 -.001  -.0879 .302
 3.0203 -.0502 .011 45083.82718 8.340 8.343 -.003  -.2603 .530
 3.5829 -.0872 .012 45083.85062 8.303 8.304 -.001  -.0787 .629
Phase = 18.49  Mean Mag = 8.390  DSP =  .54003  PE = .0024738  APR2482.EXT
--------------------------------------------------------

In the output above, the solution parameters (the lines starting with F in
the output file) consist of the NLC zero-level magnitudes (zeroth
harmonic), followed by the sine and cosine coefficients of the harmonics.
Note that the second harmonic is usually the largest in amplitude, as it is
in this case.  The output file that is generated has the extension .cmp; in
the above example, 45tmo.cmp.
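If you want to reconstruct the fitted curve from the F lines outside of
FALC (for plotting, say), the parameters are just the NLC zero points
followed by the harmonic coefficients.  The Python sketch below assumes the
same model form as before, with the coefficients grouped as (sine, cosine)
pairs per harmonic; FALC's exact coefficient ordering and phase convention
are not documented here, so check any reconstruction against the residuals
in the .cmp file before trusting it:

    import math

    # Sketch: evaluate the assumed model m = Z_j + sum_k [a_k sin(k*phi) + b_k cos(k*phi)],
    # phi = 2*pi*(jd - T0)/(P/24), from the F-line parameters of the example above.
    T0, P = 45087.5, 5.698                    # epoch (JD-2.4e6) and period (hours)
    zero_points = [8.3902, 8.3975, 8.4349, 8.4447]    # NLC zero-level magnitudes
    coeffs = [(.0250, .0014), (-.0721, .0004), (.0103, .0013),   # assumed (sin, cos)
              (.0021, -.0026), (.0006, .0127)]                   # pairs per harmonic

    def model_mag(jd, lc_index):
        phi = 2.0 * math.pi * (jd - T0) / (P / 24.0)
        m = zero_points[lc_index]
        for k, (a, b) in enumerate(coeffs, start=1):
            m += a * math.sin(k * phi) + b * math.cos(k * phi)
        return m

With this grouping the second-harmonic amplitude is
sqrt(.0721**2 + .0004**2) = 0.072 mag, by far the largest, consistent with
the remark above.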
A second file is generated with the extension .faz (45tmo.faz), which is
the phase relation file.  It's short, so here it is in its entirety:

-------------------------------------------
 19.83  8.445  .004  APR3082.EXT
 19.62  8.435  .003  APR2982.EXT
 18.73  8.398  .004  APR2582.EXT
 18.49  8.390  .002  APR2482.EXT
-------------------------------------------

This file is in the form to be used directly as input to the program FAZ,
described separately.  The first column is the phase angle, the second is
the mean absolute magnitude at that phase angle [V(alpha)], and the third
is the formal uncertainty from the fit.

This concludes the basic fitting procedure.  There are a few other items to
be mentioned.  The first is the choice of scan intervals to try in period
searching.  The object is to sample periods often enough that successive
solutions will not differ by much, that is, so you won't skip over a
minimum in the dispersion function without noticing it.  I find that minima
show up without wasting time oversampling if you choose a period increment
such that successive period choices differ by less than about 1/10 of a
cycle over the entire data span.  Thus, delta-P = 0.1*(P**2)/T, where T is
the time span of the data.  In the above example, T = 125 hours and
P = 5.7 hours, so delta-P = 0.1*(5.7**2)/125 = 0.026 hours.  If your
computer is slow and your data set is large, you can get away with this
large a step size.  To be safe, I like to stay a bit lower, hence the 0.01
value I started with in the above example analysis.

What if you have no idea what the period may be?  Another feature of the
program allows you to scan a really large range of P efficiently.  Since
the safe period step size scales as the square of the period, it is
efficient to increase the step size as the trial period gets longer.  If
you enter a negative fit order, this tells the program that you will be
scanning over a large range of P, so the period step size will be increased
in proportion to P**2 as it goes along.  A look at the individual
lightcurves of 45 would reveal that the period could hardly be less than
3 hours.  A step size of about 0.005 hours would be safe at such a short
period, so the input line and some of the output look like this:

-----------------------------------------------------
 5 5.69850 .00050 1 10.00
   -4 3. .005 400
4 3.00000 5.21578 + .0100588 .0104180   1 of 400
4 3.00502 4.39874 + .0084831 .0087860   2 of 400
4 3.01005 4.58568 + .0088436 .0091594   3 of 400
4 3.01510 4.51051 + .0086986 .0090093   4 of 400
4 3.02017 4.90058 + .0094509 .0097884   5 of 400
4 3.02525 5.33256 + .0102840 .0106513   6 of 400
4 3.03035 5.29519 + .0102119 .0105766   7 of 400
4 3.03547 5.61474 + .0108282 .0112149   8 of 400
4 3.04060 5.57318 + .0107480 .0111319   9 of 400
4 3.04576 5.79316 + .0111723 .0115713  10 of 400
------------------------------------------------------
4 5.60196 5.21003 + .0100477 .0104065 298 of 400
4 5.61692 6.09718 + .0117586 .0121785 299 of 400
4 5.63194 6.46938 + .0124764 .0129219 300 of 400
4 5.64703 5.97467 + .0115223 .0119338 301 of 400
4 5.66218 5.15119 + .0099342 .0102890 302 of 400
4 5.67740 3.21907 + .0062081 .0064298 303 of 400
4 5.69269 1.31860 + .0025430 .0026338 304 of 400
4 5.70804 1.88679 + .0036387 .0037687 305 of 400
4 5.72347 3.43360 + .0066218 .0068583 306 of 400
4 5.73896 4.01585 + .0077447 .0080213 307 of 400
4 5.75452 4.34384 + .0083772 .0086764 308 of 400
4 5.77014 4.84967 + .0093527 .0096867 309 of 400
------------------------------------------------------

The second sequence is from a bit farther down the output, showing the
correct period and confirming that it has not been missed.  Note that by
this point the scan increments are much larger than at the beginning.
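If you drive scans from a script, the step-size rule above is easy to
automate.  Here is a Python sketch that generates trial periods with
dP = 0.1*P**2/T, i.e. a roughly constant step in frequency, in the same
spirit as the negative-fit-order wide scan (the program's own step sizes
evidently follow a similar, but not necessarily identical, rule):

    # Sketch: trial-period grid with step dP = frac * P**2 / T (T = data span, hours).
    # The step grows roughly as P**2, so the grid is nearly uniform in frequency (1/P).
    def period_grid(p_start, n_steps, span_hours, frac=0.1):
        periods = [p_start]
        for _ in range(n_steps - 1):
            p = periods[-1]
            periods.append(p + frac * p * p / span_hours)
        return periods

    # frac around 0.05-0.1; smaller is safer.  With these numbers the step grows
    # from about 0.005 h near P = 3 h to roughly 0.02 h near P = 6 h.
    grid = period_grid(3.0, 400, 125.0, frac=0.07)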
One final feature of the program: as it goes along, it generates a log
file, which has the extension .scn ("scan").  This file consists of just
two columns, the period and the sigma value of the dispersion.  An example
from the above run:

-----------------------------------------------------
 5.60196  5.21003
 5.61692  6.09718
 5.63194  6.46938
 5.64703  5.97467
 5.66218  5.15119
 5.67740  3.21907
 5.69269  1.31860
 5.70804  1.88679
 5.72347  3.43360
 5.73896  4.01585
 5.75452  4.34384
 5.77014  4.84967
-----------------------------------------------------

This file can be plotted to produce a "noise spectrum" of the fit as a
function of period, displaying all possible period solutions, which is
sometimes convenient when the fit is ambiguous.  As before, the two
right-hand columns of dispersions (on the screen display) can be useful for
evaluating the formal quality of competing solutions, but they must be used
judiciously.  For example, a period solution twice as long as the correct
value will generally be formally better, because there is less data
overlap, but the resulting lightcurve will be quadruply periodic, which is
not very plausible.  At some level, experience, judgement and even artistry
enter into the fitting procedure if the data quality is marginal or the
situation is complex, as in "tumbling" asteroids.
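As a practical footnote: if you would rather look at the whole noise
spectrum than scroll through screen output, the .scn file mentioned above
is trivial to plot.  A Python sketch, assuming the log from the example run
was written as 45tmo.scn (check the actual file name your run produces):

    import numpy as np
    import matplotlib.pyplot as plt

    # Plot the .scn log as a "noise spectrum": dispersion (sigma units) vs. period.
    period, dispersion = np.loadtxt("45tmo.scn", unpack=True)
    plt.plot(period, dispersion)
    plt.xlabel("Trial period (hours)")
    plt.ylabel("RMS dispersion (a priori sigma units)")
    plt.title("FALC period scan")
    plt.show()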