Re: [Lazarus] Strange problem compiling IDE

2017-03-27 Thread Werner Pamler via Lazarus

On 27.03.2017 at 21:37, C Western via Lazarus wrote:

But nevertheless, as mentioned in the beginning, I don't think that
these files are the reason for the compilation issues that you mention.
If that were true then everybody would have them - at least I don't.


I can't claim to be an expert on the way the search path works, but 
the issue might only surface when spe is used in a component which is 
compiled as part of the IDE. May I suggest, as a short-term fix, renaming 
the local copies of the units (spe -> spe_fixed)? This will 
definitely remove the clash.


I understand. A package using the original spe and TAChart using a 
modified spe with the same name cannot coexist. You had not mentioned this 
in the previous posts.


At the moment TAChart has these modified numlib files in its 
numlib_fixes folder: ipf (fitting, splines), mdt, sle. I am writing a 
patch to remove the hard-coded array length in the numlib unit typ which 
led to the addition of ipf to TAChart. But this will only be a long-term 
solution because fpc has a slow release cycle and Lazarus wants to 
support older fpc versions as well.


Luckily, no other units of numlib depend on ipf, and therefore I could 
rename the modified ipf to ipf_fix. mdt and sle seem to contain only minor 
modifications; a diff shows me that they were modified only to silence 
the compiler, so I deleted them. In total, TAChart no longer 
contains any modified, identically named units from other packages.

--
___
Lazarus mailing list
Lazarus@lists.lazarus-ide.org
http://lists.lazarus-ide.org/listinfo/lazarus


Re: [Lazarus] Strange problem compiling IDE

2017-03-27 Thread C Western via Lazarus


But nevertheless, as mentioned in the beginning, I don't think that
these files are the reason for the compilation issues that you mention.
If that were true then everybody would have them - at least I don't.


I can't claim to be an expert on the way the search path works, but the 
issue might only surface when spe is used in a component which is 
compiled as part of the IDE. May I suggest, as a short-term fix, renaming 
the local copies of the units (spe -> spe_fixed)? This will definitely 
remove the clash.


Colin


Re: [Lazarus] TSplitter refuses to move to the left with mouse (but moves via code)

2017-03-27 Thread Jürgen Hestermann via Lazarus

On 2017-03-20 at 18:04, Jürgen Hestermann via Lazarus wrote:
> I have a TSplitter that separates elements on the left from those on the
> right.
> 
> But when I try to move the splitter with the mouse,
> only moving to the right works okay.
> Moving to the left only moves the splitter 1 pixel (I think).
> Moving it again moves it another 1 pixel and so on.
> But I cannot move it multiple pixels in one step (as I can when
> moving to the right).
> Is this a bug?

It seems that this is a bug.

When I look at TCustomSplitter.MoveSplitter,
there is a loop over all anchored controls
that calculates a value CurMaxShrink.
It seems that this routine only considers "shrinking"
these anchored controls (when the splitter moves);
"moving" these controls is not taken into account.
That's a bug IMO.
If a control cannot be shrunk but can be moved,
then that should work too.

I have anchored some controls to the left
which themselves have no anchor on their left side,
but their size is determined via AutoSize.
They are "right-aligned", so to speak.
If I move the splitter to the left via code, it works as expected,
and the anchored controls on the left are moved too.
But when using the mouse, TCustomSplitter.MoveSplitter
restricts the movement of the splitter to the
"shrinkability" of the anchored controls only.
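
To make the reported behaviour concrete, here is a pseudo-Pascal sketch 
(NOT the actual LCL source; all names besides CurMaxShrink and 
MoveSplitter are illustrative) of the difference between the current 
constraint and the suggested one:

```pascal
{ Pseudo-Pascal sketch - not the real LCL code. MoveSplitter currently
  bounds the splitter's movement only by how far the anchored controls
  can shrink: }
CurMaxShrink := MaxInt;
for i := 0 to High(AnchoredControls) do
  CurMaxShrink := Min(CurMaxShrink,
    AnchoredControls[i].Width - AnchoredControls[i].Constraints.MinWidth);
AllowedDelta := CurMaxShrink;

{ The suggestion above: controls that cannot shrink but can be moved
  (e.g. auto-sized, not anchored on their far side) should also be
  accounted for, i.e. something like: }
AllowedDelta := CurMaxShrink + CurMaxMoveOfAnchoredControls;
```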



Re: [Lazarus] Zeos SQLite Linux

2017-03-27 Thread Luca Olivetti via Lazarus

On 27/03/17 at 12:07, Michael Van Canneyt via Lazarus wrote:



On Mon, 27 Mar 2017, Michael Schnell via Lazarus wrote:


I'd like to check out working with Zeos and SQLite.

I have an "SVN" installation of Lazarus on Linux, so I used the same for
testing.

I found a demo program, and when trying it I found it includes the line
sLibraryLocation := sAppPath + 'sqlite3_library.dll';
So it is obviously done for Windows. (On Linux it just shows an
appropriate error message.)

So the question is: how do I do Zeos / SQLite / Linux? I have failed to find
such information so far.


You should ask this on a Zeos mailing list/forum. We don't maintain Zeos.
Based on the above, I would think that the Zeos SQLite driver doesn't
support Linux.


Well, I have had a project using Zeos+SQLite under Linux since 2007, and it's 
working just fine.
I recently modified it (my project, not Zeos), so I can confirm it still 
works.

I'm using Zeos 7.1.4 (in 2007 I think I was using 7.0.3).


Bye
--
Luca Olivetti
Wetron Automation Technology http://www.wetron.es/
Tel. +34 93 5883004 (Ext.3010)  Fax +34 93 5883007


Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread LacaK via Lazarus


But now another issue is coming up: If I increase the number of 
records to 40,000, population of the records slows down after about 
10,000 records, speeds up again, and comes to an apparent 
stand-still at 32,900 records. After waiting some time the record 
counter (which is incremented in steps of 100 in my demo) goes up to 
33,000. Then I gave up.

Try calling MergeChangeLog regularly, for example every 1000 rows.
If that does not help, attach your test program, so we can reproduce ...


Yes, this is the solution. Thank you. MergeChangeLog definitely should 
be documented better.

You can file a bug report about it or add it to the wiki yourself
 - There is http://wiki.freepascal.org/TBufDataset but it seems that 
there is only one line of text ;-)
 - In the FCL documentation 
http://www.freepascal.org/docs-html/current/fcl/db/index.html I cannot 
find TBufDataset at all

L.



Re: [Lazarus] Zeos SQLite Linux

2017-03-27 Thread Michael Van Canneyt via Lazarus



On Mon, 27 Mar 2017, Michael Schnell via Lazarus wrote:


I'd like to check out working with Zeos and SQLite.

I have an "SVN" installation of Lazarus on Linux, so I used the same for
testing.


I found a demo program, and when trying it I found it includes the line
sLibraryLocation := sAppPath + 'sqlite3_library.dll';
So it is obviously done for Windows. (On Linux it just shows an
appropriate error message.)


So the question is: how do I do Zeos / SQLite / Linux? I have failed to find
such information so far.


You should ask this on a Zeos mailing list/forum. We don't maintain Zeos.
Based on the above, I would think that the Zeos SQLite driver doesn't 
support Linux.


Michael.


[Lazarus] Zeos SQLite Linux

2017-03-27 Thread Michael Schnell via Lazarus

I'd like to check out working with Zeos and SQLite.

I have an "SVN" installation of Lazarus on Linux, so I used the same for
testing.


I found a demo program, and when trying it I found it includes the line
sLibraryLocation := sAppPath + 'sqlite3_library.dll';
So it is obviously done for Windows. (On Linux it just shows an
appropriate error message.)


So the question is: how do I do Zeos / SQLite / Linux? I have failed to find
such information so far.


Thanks,

-Michael



Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread Werner Pamler via Lazarus

On 27.03.2017 at 10:59, LacaK via Lazarus wrote:
But now another issue is coming up: If I increase the number of 
records to 40,000, population of the records slows down after about 10,000 
records, speeds up again, and comes to an apparent stand-still at 
32,900 records. After waiting some time the record counter (which is 
incremented in steps of 100 in my demo) goes up to 33,000. Then I gave 
up.

Try calling MergeChangeLog regularly, for example every 1000 rows.
If that does not help, attach your test program, so we can reproduce ...


Yes, this is the solution. Thank you. MergeChangeLog definitely should 
be documented better.
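
For reference, applied to the demo's population loop, the suggestion to 
merge regularly could look like this (a sketch only; the interval of 1000 
is just the example value from the thread):

```pascal
// Sketch: merge TBufDataset's change log periodically while populating,
// so the pending-changes log does not grow without bound.
for i := 1 to NUM_RECORDS do
begin
  FExportDataset.Insert;
  // ... assign field values as in the demo ...
  FExportDataset.Post;
  if i mod 1000 = 0 then
    TBufDataset(FExportDataset).MergeChangeLog;
end;
TBufDataset(FExportDataset).MergeChangeLog; // merge the remainder before saving
```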




Re: [Lazarus] Strange problem compiling IDE

2017-03-27 Thread Werner Pamler via Lazarus

On 27.03.2017 at 10:29, C Western via Lazarus wrote:

On 26/03/17 23:13, Martin Frb via Lazarus wrote:

On 26/03/2017 22:11, C Western via Lazarus wrote:

I am having a strange problem when compiling the IDE (current svn for
both IDE and FPC). The compilation stops with



Warning: Recompiling Expr, checksum changed for spe {impl}

"changed for spe"
either spe got recompiled, or you have 2 different spe.ppu

This happens for example when you
- have 2 spe.pas files
- have search paths for units that overlap (one spe.pas, but
visible in the search paths of 2 packages)


Expr.pas(78,12) Fatal: Can't find unit Expr used by FormGrid


It turns out the problem was indeed a duplicate spe.pas; the culprit was

components/tachart/numlib_fix/spe.pas

Simply deleting this file allows the IDE to compile. (I don't use the 
tachart package.) This is very difficult to figure out from the 
error messages, as described in my earlier message; looking back very 
carefully at the -vt output I can see that there is a ppu loading 
message for the tachart unit, but then the compiler keeps on looking 
for spe.pas, with no indication at that point that the .ppu has been 
rejected. The volume of output doesn't help.


I do a lot of recompilations of the IDE, I am a heavy user of TAChart, 
and I also have programs in which spe is used: I have never seen this 
issue. I can't remember why Alexander had to add this local copy of the 
numlib file; it certainly was a workaround for some issue. If I compare 
the spe of fpc 3.0.2 with that local copy I don't see any essential 
differences. Therefore, I removed spe from the numlib_fixes folder of 
TAChart today.


But: in numlib_fixes there are other local copies of numlib files. They 
contain workarounds for issues with the original files. The fixes 
were made by Alexander, and I don't know which issues they address. At 
the moment I am working with numlib and have some patches in the bug 
tracker, but progress is slow since I don't have write permission 
there. But I can promise to try to remove the duplicated numlib files 
from TAChart in the long run.


But nevertheless, as mentioned in the beginning, I don't think that 
these files are the reason for the compilation issues that you mention. 
If that were true then everybody would have them - at least I don't.



Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread LacaK via Lazarus



Try calling FExportDataset.MergeChangeLog before:
  WriteLn('Saving...'); 

Did anything in your timing change?


Ah - that's it. TBufDataset saves the records instantly now. Probably 
this should go onto the official wiki page for TBufDataset.


But now another issue is coming up: If I increase the number of 
records to 40,000, population of the records slows down after about 10,000 
records, speeds up again, and comes to an apparent stand-still at 
32,900 records. After waiting some time the record counter (which is 
incremented in steps of 100 in my demo) goes up to 33,000. Then I gave up.

Try calling MergeChangeLog regularly, for example every 1000 rows.
If that does not help, attach your test program, so we can reproduce ...

L.



Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread Martin Schreiber via Lazarus
On Sunday 26 March 2017 23:53:08 Werner Pamler via Lazarus wrote:
> Trying to extend the import/export example of fpspreadsheet from a dBase
> table to a TBufDataset, I came across this issue with TBufDataset: While
> data are posted to the database as quickly as usual, writing to file
> takes extremely long if there are more than a few thousand records.
>
> Run the demo attached below. On my system, I measure these (non-linearly
> scaling) execution times for writing the TBufDataset table to file:
>
> 1000 records -- 0.9 seconds
> 2000 records -- 8.8 seconds
> 3000 records -- 31.1 seconds
> etc.
>
> Compared to that, writing the same data to a dbf file is done in the blink
> of an eye. Is there anything I am doing wrong? Or should I report a bug?
>
Can you switch off the 'ApplyUpdates' functionality in TBufDataset? MSEgui's 
TLocalDataset (a fork of FPC's TBufDataset) writes 1'000'000 records in about 
0.4 seconds if the option bdo_noapply is set.
" 
100: 0.313s
100: 0.308s
100: 0.319s
100: 0.311s
100: 0.411s
100: 0.293s
100: 0.327s
100: 0.321s
3000: 0.001s
3000: 0.001s
3000: 0.001s
"
"
procedure tmainfo.recev(const sender: TObject);
var
 i1: int32;
 t1: tdatetime;
begin
 locds.active:= false;
 locds.disablecontrols();
 try
  locds.active:= true;
  for i1:= 1 to reccount.value do begin
   locds.appendrecord([i1,inttostrmse(i1)+'abcdefghiklmnop',10*i1]);
  end;
  t1:= nowutc();
  locds.savetofile('test.db');
  t1:= nowutc()-t1;
  writeln(reccount.value,': ',formatfloatmse(t1*60*60*24,'0.000s'));
  locds.active:= false;
 finally
  locds.enablecontrols();
 end;
end;
"

Martin


[Lazarus] TLMMouseEvent axis

2017-03-27 Thread Alexey via Lazarus

Hi

For Carbon with the Mac trackpad, it is necessary to have an Axis field 
in this record (1 bit is enough: vertical = 0, horizontal = 1).


Please add it? Maybe add it at the end.

  PLMMouseEvent = ^TLMMouseEvent;
  TLMMouseEvent = record
    Msg: Cardinal;
    {$ifdef cpu64}
    UnusedMsg: Cardinal;
    {$endif}
    {$IFDEF FPC_LITTLE_ENDIAN}
    Button: Word;         // 1=left, 2=right, 3=middle
    WheelDelta: SmallInt; // -1 for up, 1 for down
    {$ELSE}
    WheelDelta: SmallInt; // -1 for up, 1 for down
    Button: Word;         // 1=left, 2=right, 3=middle
    {$ENDIF}
    {$ifdef cpu64}
    Unused1: Longint;
    {$endif cpu64}
    X: Smallint;          // under gtk this is longint
    Y: Smallint;          // ditto
    {$ifdef cpu64}
    Unused2: Longint;
    {$endif cpu64}
    Result: LRESULT;      // to fit std message size
    UserData: pointer;    // used under gtk
    State: TShiftState;   // in win is the equivalent of button
  end;
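
Appending the requested field at the end, as suggested, might look like 
this (the field name and type here are only a proposal, not a committed 
LCL API):

```pascal
  TLMMouseEvent = record
    // ... existing fields as in the declaration above ...
    State: TShiftState;   // in win is the equivalent of button
    Axis: Word;           // proposed: 0 = vertical wheel, 1 = horizontal (trackpad)
  end;
```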



--
Regards,
Alexey



Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread Werner Pamler via Lazarus

On 27.03.2017 at 10:13, LacaK via Lazarus wrote:

Try calling FExportDataset.MergeChangeLog before:
  WriteLn('Saving...'); 

Did anything in your timing change?


Ah - that's it. TBufDataset saves the records instantly now. Probably 
this should go onto the official wiki page for TBufDataset.


But now another issue is coming up: If I increase the number of records 
to 40,000, population of the records slows down after about 10,000 records, 
speeds up again, and comes to an apparent stand-still at 32,900 records. 
After waiting some time the record counter (which is incremented in 
steps of 100 in my demo) goes up to 33,000. Then I gave up.


Again, if I run the demo with TMemDataset, these effects do not show up. 
(As for the current code, see my other post from today.)



Re: [Lazarus] Strange problem compiling IDE

2017-03-27 Thread C Western via Lazarus

On 26/03/17 23:13, Martin Frb via Lazarus wrote:

On 26/03/2017 22:11, C Western via Lazarus wrote:

I am having a strange problem when compiling the IDE (current svn for
both IDE and FPC). The compilation stops with



Warning: Recompiling Expr, checksum changed for spe {impl}

"changed for spe"
either spe got recompiled, or you have 2 different spe.ppu

This happens for example when you
- have 2 spe.pas files
- have search paths for units that overlap (one spe.pas, but
visible in the search paths of 2 packages)


Expr.pas(78,12) Fatal: Can't find unit Expr used by FormGrid





It turns out the problem was indeed a duplicate spe.pas; the culprit was

components/tachart/numlib_fix/spe.pas

Simply deleting this file allows the IDE to compile. (I don't use the 
tachart package.) This is very difficult to figure out from the 
error messages, as described in my earlier message; looking back very 
carefully at the -vt output I can see that there is a ppu loading 
message for the tachart unit, but then the compiler keeps on looking for 
spe.pas, with no indication at that point that the .ppu has been 
rejected. The volume of output doesn't help.


Colin


Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread LacaK via Lazarus

Try calling FExportDataset.MergeChangeLog before:
  WriteLn('Saving...'); 

Did anything in your timing change?
-Laco.


Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread Werner Pamler via Lazarus

On 27.03.2017 at 00:53, Howard Page-Clark via Lazarus wrote:

I can get small performance increases by
- avoiding FieldByName() calls and using AppendRecord
No, at least not for the issue I am referring to. As in my answer to 
Marc's comment: that happens while the table is populated, but the delay 
occurs when the populated table is written to stream/file.



- using SaveToFile and avoiding an intermediate memory stream
Has no noticeable effect. 1000 records really is not much, and flushing 
the memory stream to disk occurs without any delay (see my code, which has 
measurement points before and after writing the memory stream and after 
writing to file). In fact, when I noticed this effect I did not have any 
explicit writing code at all; I noticed an excessive delay while the 
dataset was being closed -- this is when the BufDataset is saved automatically.



- increasing the value of PacketRecords
Not knowing what this is, I increased the value in factors of 10 from 1 
to 1e9 and don't see any effect beyond the usual scatter.


Clearly either the insertion algorithm should be improved, or the 
buffering, or the way the buffered records are written to disk. Maybe 
all three areas of TBufDataset can be optimised for better performance.
Thanks. When I have time I'll write a bug report. The current 
TBufDataset is usable only as a pure in-memory table which is never 
written to file. BTW, in the attached modified demo code the TBufDataset 
can be replaced by a TMemDataset (define "USE_MEM_DATASET"), and this 
one is written instantly.


-- snip ---

program project1;

{$mode objfpc}{$H+}

{$DEFINE USE_MEM_DATASET}

uses
  SysUtils, classes, db, memds, bufdataset;

const
  TABLENAME = 'people'; // name for the database table; extension will be added
  DATADIR = 'data';     // subdirectory where the database is stored

const
  NUM_RECORDS = 5000;
  SECONDS_PER_DAY = 24 * 60 * 60;

var
  FExportDataset: TDataset;

procedure CreateDatabase;
var
  i: Integer;
  fn: String;
  stream: TMemoryStream;
  t: TDateTime;
begin
  ForceDirectories(DATADIR);

  fn := DATADIR + DirectorySeparator + TABLENAME + '.db';
  DeleteFile(fn);

  {$IFDEF USE_MEM_DATASET}
  FExportDataset := TMemDataset.Create(nil);
  {$ELSE}
  FExportDataset := TBufDataset.Create(nil);
  {$ENDIF}

  FExportDataset.FieldDefs.Add('Last name', ftString, 15);
  FExportDataset.FieldDefs.Add('First name', ftString, 10);
  FExportDataset.FieldDefs.Add('City', ftString, 15);
  FExportDataset.FieldDefs.Add('Birthday', ftDate);
  FExportDataset.FieldDefs.Add('Salary', ftCurrency);
  FExportDataset.FieldDefs.Add('Work begin', ftDateTime);
  FExportDataset.FieldDefs.Add('Work end', ftDateTime);
  FExportDataset.FieldDefs.Add('Size', ftFloat);
  {$IFNDEF USE_MEM_DATASET}
  TBufDataset(FExportDataset).CreateDataset;
  {$ENDIF}

  FExportDataset.Open;

  // Random data
  for i := 1 to NUM_RECORDS do begin
    if (i mod 100 = 0) then
      WriteLn(Format('Adding record %d...', [i]));
    FExportDataset.Insert;
    FExportDataset.FieldByName('Last name').AsString := 'A';
    FExportDataset.FieldByName('First name').AsString := 'B';
    FExportDataset.FieldByName('City').AsString := 'C';
    FExportDataset.FieldByName('Birthday').AsDateTime := 0;
    FExportDataset.FieldByName('Salary').AsFloat := 0;
    FExportDataset.FieldByName('Size').AsFloat := 0;
    FExportDataSet.FieldByName('Work begin').AsDateTime := 0;
    FExportDataSet.FieldByName('Work end').AsDateTime := 0;
    FExportDataset.Post;
  end;

  WriteLn('Saving...');
  t := now;
  stream := TMemoryStream.Create;
  try
    {$IFDEF USE_MEM_DATASET}
    TMemDataset(FExportDataset).SaveToStream(stream);
    {$ELSE}
    TBufDataset(FExportDataset).SaveToStream(stream);
    {$ENDIF}
    stream.Position := 0;
    WriteLn('Written to memory stream: ', FormatFloat('0.000 s', (now - t) * SECONDS_PER_DAY));

    stream.SaveToFile(fn);
  finally
    stream.Free;
  end;
  WriteLn('Done. Total time needed for saving: ', FormatFloat('0.000 s', (now - t) * SECONDS_PER_DAY));

  FExportDataset.Close;

  WriteLn(Format('Created file "%s" in folder "%s".', [
    ExtractFileName(fn), ExtractFileDir(fn)
  ]));
  FExportDataset.Free;
end;

begin
  CreateDatabase;

  WriteLn;
  WriteLn('Press ENTER to close.');
  ReadLn;
end.



Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread Werner Pamler via Lazarus


On 27.03.2017 at 09:07, Marc Santhoff via Lazarus wrote:

I didn't count, but you make extensive use of the Random() function.
Could that be the cause of slowness?


No, Random() is called only while the records are populated - this step 
completes without any noticeable delay. Time is measured afterwards when 
the populated table is written to stream/file.



Re: [Lazarus] Writing >1000 TBufDataset records to file is extremely slow

2017-03-27 Thread Marc Santhoff via Lazarus
On So, 2017-03-26 at 23:53 +0200, Werner Pamler via Lazarus wrote:
> Trying to extend the import/export example of fpspreadsheet from a dBase 
> table to a TBufDataset, I came across this issue with TBufDataset: While 
> data are posted to the database as quickly as usual, writing to file 
> takes extremely long if there are more than a few thousand records.
> 
> Run the demo attached below. On my system, I measure these (non-linearly 
> scaling) execution times for writing the TBufDataset table to file:
> 
> 1000 records -- 0.9 seconds
> 2000 records -- 8.8 seconds
> 3000 records -- 31.1 seconds
> etc.
> 
> Compared to that, writing the same data to a dbf file is done in the blink 
> of an eye. Is there anything I am doing wrong? Or should I report a bug?
> 

I didn't count, but you make extensive use of the Random() function.
Could that be the cause of slowness?

HTH,
Marc

[...]
>FExportDataset.Open;
> 
>// Random data
>for i:=1 to NUM_RECORDS do begin
>  if (i mod 100 = 0) then
>WriteLn(Format('Adding record %d...', [i]));
>  FExportDataset.Insert;
>  FExportDataset.FieldByName('Last name').AsString := 
> LAST_NAMES[Random(NUM_LAST_NAMES)];
>  FExportDataset.FieldByName('First name').AsString := 
> FIRST_NAMES[Random(NUM_FIRST_NAMES)];
>  FExportDataset.FieldByName('City').AsString := 
> CITIES[Random(NUM_CITIES)];
>  FExportDataset.FieldByName('Birthday').AsDateTime := startDate - 
> random(maxAge);
>  FExportDataset.FieldByName('Salary').AsFloat := 1000+Random(9000);
>  FExportDataset.FieldByName('Size').AsFloat := (160 + Random(50)) / 100;
>  FExportDataSet.FieldByName('Work begin').AsDateTime := 
> 4+EncodeTime(6+Random(4), Random(60), Random(60), 0);
>  FExportDataSet.FieldByName('Work end').AsDateTime := 
> EncodeTime(15+Random(4), Random(60), Random(60), 0);
>  FExportDataset.Post;
>end;

