Re: [PERFORM] rotate records

2006-02-27 Thread Bruno Wolff III
On Tue, Feb 28, 2006 at 09:14:59 +0530,
  "Jeevanandam, Kathirvel (IE10)" <[EMAIL PROTECTED]> wrote:
> Hi all,

Please don't hijack existing threads to start new ones. This can cause
people to miss your question and messes up the archives.

Performance questions should generally be posted to the performance list.
I have redirected followups to there.

> 
> I am facing performance issues even with fewer than 3000 records. I am
> using triggers/stored procedures in all the tables. What could be the
> problem? Is it a good idea to use triggers, performance-wise?

A common cause of this kind of thing is not running vacuum often enough
leaving you with a lot of dead tuples.

You should probably start by doing a vacuum full analyse and then showing
the list some problem query sources along with explain analyse output
for them.

> 
> Regards,
> Jeeva.K
> 
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
> 
>http://archives.postgresql.org

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] wal sync method

2006-02-27 Thread Bruce Momjian

Use whichever sync method is fastest for you.  They are all reliable,
except turning fsync off.

---

Javier Somoza wrote:
> 
> 
> 
> Hi Evgeny
> 
> I'm also testing which fsync method to use. Using this program
> (http://archives.postgresql.org/pgsql-performance/2003-12/msg00191.php),
> a bit modified, I get these results:
> 
> write              0.36
> write & fsync      0.006796
> write & fdatasync  0.001001
> write (O_FSYNC)    0.005761
> write (O_DSYNC)    0.005894
> 
> So fdatasync is faster for me?
> 
> 
> > Hi everybody!
> > 
> > Which wal sync method is the fastest under linux 2.6.x?
> > I'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.
> > 
> > I've tried to switch to open_sync which seems to work 
> > faster than default fdatasync, but is it crash-safe?
> 
> 
> 
> 
> Javier Somoza
> Oficina de Dirección Estratégica
> mailto:[EMAIL PROTECTED]
> 
> Panda Software
> Buenos Aires, 12
> 48001 BILBAO - ESPAÑA
> Teléfono: 902 24 365 4
> Fax:  94 424 46 97
> http://www.pandasoftware.es
> Panda Software, one of the leading developers of protection solutions
> against viruses and intruders, presents its new family of solutions.
> All computer users, from the largest networks to home users, now have
> new products with excellent security technologies. More information
> at: http://www.pandasoftware.es/productos
> 
> Protect yourself now against viruses and intruders! Try our products
> for free at http://www.pandasoftware.es/descargas/

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] fsync and battery-backed caches

2006-02-27 Thread Bruce Momjian
Jim C. Nasby wrote:
> On Mon, Feb 27, 2006 at 11:12:57AM +0100, Tino Wildenhain wrote:
> > Javier Somoza schrieb:
> > >
> > >Hi all
> > >
> > >Is it safe to disable fsync when you have a battery-backed disk cache?
> > >   
> > >Thx
> > >
> > No. fsync moves the data from OS memory cache to disk-adaptor
> > cache which is required to benefit from battery backup.
> 
> More importantly, it guarantees that data is committed to non-volatile
> storage in such a way that PostgreSQL can recover from a crash without
> corruption.
> 
> If you have a battery-backed controller and turn on write caching you
> shouldn't see much effect from fsync anyway.

We do mention battery-backed cache in our docs:

http://www.postgresql.org/docs/8.1/static/wal.html

If it is unclear, please let us know.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] fsync and battery-backed caches

2006-02-27 Thread Jim C. Nasby
On Mon, Feb 27, 2006 at 11:12:57AM +0100, Tino Wildenhain wrote:
> Javier Somoza schrieb:
> >
> >Hi all
> >
> > >Is it safe to disable fsync when you have a battery-backed disk cache?
> >   
> >Thx
> >
> No. fsync moves the data from OS memory cache to disk-adaptor
> cache which is required to benefit from battery backup.

More importantly, it guarantees that data is committed to non-volatile
storage in such a way that PostgreSQL can recover from a crash without
corruption.

If you have a battery-backed controller and turn on write caching you
shouldn't see much effect from fsync anyway.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Large Table With Only a Few Rows

2006-02-27 Thread Peter Childs
On 27/02/06, Chris Browne <[EMAIL PROTECTED]> wrote:
"Nik" <[EMAIL PROTECTED]> writes:> I have a table that has only a few records in it at the time, and they> get deleted every few seconds and new records are inserted. Table never
> has more than 5-10 records in it.>> However, I noticed a deteriorating performance in deletes and inserts> on it. So I performed vacuum analyze on it three times (twice in a row,> and once two days later). In the statistics it says that the table size
> is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even> though the table has only 5-10 rows in it it. I was wondering how can I> reclaim all this space and improve the performance?
You need to run VACUUM ANALYZE on this table very frequently.Based on what you describe, "very frequently" should be on the orderof at least once per minute.Schedule a cron job specifically to vacuum this table, with a cron
entry like the following:* * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_databaseOf course, you need to bring it back down to size, first.You could run CLUSTER on the table to bring it back down to size;
that's probably the fastest way...   cluster my_table_pk on my_table;VACUUM FULL would also do the job, but probably not as quickly.--(reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
http://cbbrowne.com/info/sgml.html"Now they can put you in jail if they *THINK* you're gonna commit acrime.  Let me say that again, because it sounds vaguely important"
--george carlin---(end of broadcast)---TIP 9: In versions below 8.0, the planner will ignore your desire to   choose an index scan if your joining column's datatypes do not
   match
You probably want to do one or two other things.

1> Switch on autovacuum.

2> Increase max_fsm_pages in your postgresql.conf; a restart will be required.

If you do a "vacuum verbose;", the last couple of lines should tell you
how much free space there is versus how much free space the database
can actually remember to use.

INFO:  free space map contains 5464 pages in 303 relations
DETAIL:  A total of 9760 page slots are in use (including overhead).
9760 page slots are required to track all free space.
Current limits are:  4 page slots, 1000 relations, using 299 KB.

If the required page slots (9760 in my case) go above the current
limit (4 in my case) you will need to do a vacuum full to reclaim
the free space. (A cluster of the relevant tables may also work.)

If you run vacuum verbose regularly you can check that you are
vacuuming often enough and that your free space map is big enough to
hold your free space.
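For example, the comparison can be scripted; the max_fsm_pages value
below is an assumed example, not an actual setting from this thread:

```shell
# Compare VACUUM VERBOSE's "page slots are required" figure against
# max_fsm_pages. REQUIRED_SLOTS mirrors the example output above;
# MAX_FSM_PAGES is an assumed postgresql.conf value.
REQUIRED_SLOTS=9760
MAX_FSM_PAGES=20000
if [ "$REQUIRED_SLOTS" -gt "$MAX_FSM_PAGES" ]; then
  echo "free space map too small: raise max_fsm_pages, then VACUUM FULL"
else
  echo "free space map is large enough"
fi
```

If the first branch fires, space already leaked past the map can only
be reclaimed with VACUUM FULL (or CLUSTER); raising max_fsm_pages only
prevents future leakage.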

Peter Childs
 


Re: [PERFORM] The trigger can be specified to fire on time condition?

2006-02-27 Thread Chris Browne
[EMAIL PROTECTED] (Jamal Ghaffour) writes:
> Hi All, I'm using the PostgreSQL database to store cookies. These
> cookies become invalid after 30 minutes and have to be deleted. I
> have defined a procedure that deletes all invalid cookies, but I
> don't know how to call it periodically (for example, every hour). I
> think this must be possible, because autovacuum does the same kind of
> thing, running vacuum on a schedule (every 60s by default). After
> reading the documentation, it seems that triggers can't handle this.
> How can I solve the problem?

Time-based event scheduling is done using cron, external to the
database.
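A sketch of such a cron entry; the database name "appdb", table
"cookies", and column "created_at" are illustrative assumptions, not
names from this thread:

```shell
# Hypothetical crontab entry: at the top of every hour, delete cookies
# older than 30 minutes. Adjust names and psql path to your setup.
0 * * * * /usr/local/bin/psql -d appdb -c "DELETE FROM cookies WHERE created_at < now() - interval '30 minutes'"
```

Alternatively the cron line can call the already-defined procedure via
`psql -c "SELECT delete_invalid_cookies()"`, assuming that is what the
procedure is named.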
-- 
output = reverse("gro.mca" "@" "enworbbc")
http://cbbrowne.com/info/sgml.html
"Even in the  area of anticompetitive conduct, Microsoft  is mainly an
imitator." -- Ralph Nader (1998/11/11)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Large Table With Only a Few Rows

2006-02-27 Thread Chris Browne
"Nik" <[EMAIL PROTECTED]> writes:
> I have a table that has only a few records in it at the time, and they
> get deleted every few seconds and new records are inserted. Table never
> has more than 5-10 records in it.
>
> However, I noticed a deteriorating performance in deletes and inserts
> on it. So I performed vacuum analyze on it three times (twice in a row,
> and once two days later). In the statistics it says that the table size
> is 863Mb, toast table size is 246Mb, and indexes size is 134Mb, even
> though the table has only 5-10 rows in it. I was wondering how I can
> reclaim all this space and improve the performance?

You need to run VACUUM ANALYZE on this table very frequently.

Based on what you describe, "very frequently" should be on the order
of at least once per minute.

Schedule a cron job specifically to vacuum this table, with a cron
entry like the following:

* * * * * /usr/local/bin/vacuumdb -z -t my_table -p 5432 my_database

Of course, you need to bring it back down to size, first.

You could run CLUSTER on the table to bring it back down to size;
that's probably the fastest way...

   cluster my_table_pk on my_table;

VACUUM FULL would also do the job, but probably not as quickly.
-- 
(reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
http://cbbrowne.com/info/sgml.html
"Now they can put you in jail if they *THINK* you're gonna commit a
crime.  Let me say that again, because it sounds vaguely important"
--george carlin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] The trigger can be specified to fire on time condition?

2006-02-27 Thread Alvaro Herrera
Jamal Ghaffour wrote:
> Hi All,
> I'm using the PostgreSQL database to store cookies. These cookies
> become invalid after 30 minutes and have to be deleted. I have
> defined a procedure that deletes all invalid cookies, but I don't
> know how to call it periodically (for example, every hour).
> I think this must be possible, because autovacuum does the same kind
> of thing, running vacuum on a schedule (every 60s by default).
> After reading the documentation, it seems that triggers can't handle
> this.
> How can I solve the problem?

Use your system's crontab!  (On Windows, "scheduled tasks" or whatever).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[PERFORM] The trigger can be specified to fire on time condition?

2006-02-27 Thread Jamal Ghaffour

Hi All,
I'm using the PostgreSQL database to store cookies. These cookies
become invalid after 30 minutes and have to be deleted. I have
defined a procedure that deletes all invalid cookies, but I don't
know how to call it periodically (for example, every hour).
I think this must be possible, because autovacuum does the same kind
of thing, running vacuum on a schedule (every 60s by default).
After reading the documentation, it seems that triggers can't handle
this.

How can I solve the problem?


Thanks


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] neverending vacuum

2006-02-27 Thread Csaba Nagy
> So one very effective way of speeding this process up is giving the
> vacuum process lots of memory, because it will have to do fewer passes
> at each index.  How much do you have?

OK, this is my problem... it was left at the default (16 megabytes?).
This must be a configuration mistake; on other similar boxes I set it
to 262144 (256 megabytes). The box has 4 GB of memory.

Thanks for the explanation - you were right on the spot, it will likely
solve the problem.

Cheers,
Csaba.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] neverending vacuum

2006-02-27 Thread Alvaro Herrera
Csaba Nagy wrote:

> I have a quite big table (about 200 million records, and ~2-3 million
> updates/~1 million inserts/few thousand deletes per day). I started a
> vacuum on it on friday evening, and it still runs now (monday
> afternoon). I used "vacuum verbose", and the output looks like:
> 
> [vacuums list all the indexes noting how many tuples it cleaned, then
> "restarts" and lists all the indexes again, then again ... ad nauseam]

What happens is this: the vacuum command scans the heap and notes which
tuples need to be removed.  It needs to remember them in memory, but
memory is limited; it uses the maintenance_work_mem GUC setting to
figure out how much to use.  Within this memory it needs to store the
TIDs (absolute location) of tuples that need to be deleted.  When the
memory is filled, it stops scanning the heap and scans the first index,
looking for pointers to any of the tuples that were deleted in the heap.
Eventually it finds them all and goes to the next index: scan, delete
pointers.  Next index.  And so on, until all the indexes are done.

At this point, the first pass is done.  Vacuum must then continue
scanning the heap for the next set of TIDs, until it finds enough to
fill maintenance_work_mem.  Scan the indexes to clean them.  Start
again.  And again.

So one very effective way of speeding this process up is giving the
vacuum process lots of memory, because it will have to do fewer passes
at each index.  How much do you have?
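A back-of-the-envelope sketch of that arithmetic. The dead-tuple count
and memory setting below are assumptions; vacuum stores one 6-byte TID
per dead tuple, so a 16 MB maintenance_work_mem holds roughly 2.8
million TIDs per pass, which is consistent with the ~2.8 million row
versions removed per pass in the vacuum verbose output elsewhere in
this thread:

```shell
# Estimate how many index-cleaning passes a vacuum needs.
# Assumption: vacuum stores one 6-byte TID per dead tuple.
DEAD_TUPLES=8000000     # hypothetical count of dead tuples in the table
MWM_KB=16384            # maintenance_work_mem = 16 MB, the old default
TIDS_PER_PASS=$(( MWM_KB * 1024 / 6 ))
PASSES=$(( (DEAD_TUPLES + TIDS_PER_PASS - 1) / TIDS_PER_PASS ))
echo "tids per pass: $TIDS_PER_PASS"
echo "index passes:  $PASSES"
```

With maintenance_work_mem at 262144 (256 MB) the per-pass capacity
rises to roughly 44 million TIDs, so each index would be scanned far
fewer times.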

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[PERFORM] neverending vacuum

2006-02-27 Thread Csaba Nagy
Hi all,

Short story:

I have a quite big table (about 200 million records, and ~2-3 million
updates/~1 million inserts/few thousand deletes per day). I started a
vacuum on it on friday evening, and it still runs now (monday
afternoon). I used "vacuum verbose", and the output looks like:

INFO:  vacuuming "public.big_table"
INFO:  index "pk_big_table" now contains 223227480 row versions in
1069776 pages
DETAIL:  711140 index row versions were removed.
80669 index pages have been deleted, 80669 are currently reusable.
CPU 14.56s/46.42u sec elapsed 13987.65 sec.
INFO:  index "idx_big_table_1" now contains 223229722 row versions in
740108 pages
DETAIL:  711140 index row versions were removed.
58736 index pages have been deleted, 58733 are currently reusable.
CPU 12.90s/94.97u sec elapsed 10052.12 sec.
INFO:  index "idx_big_table_2" now contains 16779341 row versions in
55831 pages
DETAIL:  125895 index row versions were removed.
369 index pages have been deleted, 337 are currently reusable.
CPU 1.39s/5.81u sec elapsed 763.25 sec.
INFO:  index "idx_big_table_3" now contains 472945 row versions in 2536
pages
DETAIL:  5328 index row versions were removed.
595 index pages have been deleted, 595 are currently reusable.
CPU 0.06s/0.20u sec elapsed 35.36 sec.
INFO:  index "idx_big_table_4" now contains 471419 row versions in 2537
pages
DETAIL:  5318 index row versions were removed.
591 index pages have been deleted, 591 are currently reusable.
CPU 0.08s/0.21u sec elapsed 36.18 sec.
INFO:  "big_table": removed 2795984 row versions in 413228 pages
DETAIL:  CPU 22.19s/26.92u sec elapsed 5095.57 sec.
INFO:  index "pk_big_table" now contains 221069840 row versions in
1069780 pages
DETAIL:  2162406 index row versions were removed.
90604 index pages have been deleted, 80609 are currently reusable.
CPU 7.77s/15.92u sec elapsed 13576.07 sec.
INFO:  index "idx_big_table_1" now contains 221087391 row versions in
740109 pages
DETAIL:  2162406 index row versions were removed.
66116 index pages have been deleted, 58647 are currently reusable.
CPU 6.34s/23.22u sec elapsed 10592.02 sec.
INFO:  index "idx_big_table_2" now contains 16782762 row versions in
55831 pages
DETAIL:  21 index row versions were removed.
355 index pages have been deleted, 323 are currently reusable.
CPU 0.24s/0.78u sec elapsed 651.89 sec.
INFO:  index "idx_big_table_3" now contains 482084 row versions in 2536
pages
DETAIL:  525 index row versions were removed.
561 index pages have been deleted, 561 are currently reusable.
CPU 0.04s/0.10u sec elapsed 36.80 sec.
INFO:  index "idx_big_table_4" now contains 480575 row versions in 2537
pages
DETAIL:  525 index row versions were removed.
558 index pages have been deleted, 558 are currently reusable.
CPU 0.07s/0.17u sec elapsed 39.37 sec.
INFO:  "big_table": removed 2795985 row versions in 32975 pages
DETAIL:  CPU 0.96s/0.30u sec elapsed 232.51 sec.
INFO:  index "pk_big_table" now contains 218297352 row versions in
1069780 pages
DETAIL:  2795309 index row versions were removed.
103434 index pages have been deleted, 80489 are currently reusable.
CPU 10.40s/18.63u sec elapsed 14420.05 sec.
INFO:  index "idx_big_table_1" now contains 218310055 row versions in
740109 pages
DETAIL:  2795309 index row versions were removed.
75674 index pages have been deleted, 58591 are currently reusable.
CPU 6.46s/23.33u sec elapsed 10495.41 sec.
INFO:  index "idx_big_table_2" now contains 16782885 row versions in
55831 pages
DETAIL:  29 index row versions were removed.
354 index pages have been deleted, 322 are currently reusable.
CPU 0.24s/0.72u sec elapsed 653.09 sec.
INFO:  index "idx_big_table_3" now contains 491320 row versions in 2536
pages
DETAIL:  451 index row versions were removed.
529 index pages have been deleted, 529 are currently reusable.
CPU 0.02s/0.13u sec elapsed 36.83 sec.
INFO:  index "idx_big_table_4" now contains 489798 row versions in 2537
pages
DETAIL:  451 index row versions were removed.
522 index pages have been deleted, 522 are currently reusable.
CPU 0.03s/0.13u sec elapsed 36.50 sec.
INFO:  "big_table": removed 2795957 row versions in 32947 pages
DETAIL:  CPU 0.93s/0.28u sec elapsed 216.91 sec.
INFO:  index "pk_big_table" now contains 215519688 row versions in
1069780 pages
DETAIL:  2793693 index row versions were removed.
115142 index pages have been deleted, 80428 are currently reusable.
CPU 7.97s/16.05u sec elapsed 14921.06 sec.
INFO:  index "idx_big_table_1" now contains 215523269 row versions in
740109 pages
DETAIL:  2793693 index row versions were removed.
83819 index pages have been deleted, 58576 are currently reusable.
CPU 8.62s/34.15u sec elapsed 9607.76 sec.
INFO:  index "idx_big_table_2" now contains 16780518 row versions in
55831 pages
DETAIL:  2385 index row versions were removed.
362 index pages have been deleted, 322 are currently reusable.
CPU 0.20s/0.73u sec elapsed 701.77 sec.
INFO:  index "idx_big_table_3" now contains 492309 row versions in 2536
pages
DETAIL:  1097 index row versions w

Re: [PERFORM] Setting the shared buffers

2006-02-27 Thread Claus Guttesen
>     How should I set this parameter? Does it depend on the amount of
>     memory? And is it necessary to run a benchmark afterwards?

I've set it to 'shared_buffers = 12288' with 8 GB RAM on postgresql
7.4.9, FreeBSD 6.0. There is no exact size; it depends on the type of
workload, the server OS, etc. Adjust it up and down and see if your
performance changes.

regards
Claus
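For reference, shared_buffers is counted in 8 KB pages on these
versions, so the 12288 figure corresponds to 96 MB; a quick sketch of
the conversion:

```shell
# shared_buffers is measured in 8 KB pages; convert a target size in
# MB into the page count that goes into postgresql.conf.
TARGET_MB=96
PAGES=$(( TARGET_MB * 1024 / 8 ))
echo "shared_buffers = $PAGES"   # 96 MB -> 12288 pages
```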


[PERFORM] Setting the shared buffers

2006-02-27 Thread Javier Somoza





    Hi,

    How should I set this parameter? Does it depend on the amount of
    memory? And is it necessary to run a benchmark afterwards?

    What did you do?

    Thanks!
    






Re: [PERFORM] wal sync method

2006-02-27 Thread Javier Somoza







    Hi Evgeny,

    I'm also testing which fsync method to use. Using this program
    (http://archives.postgresql.org/pgsql-performance/2003-12/msg00191.php),
    a bit modified, I get these results:

    write              0.36
    write & fsync      0.006796
    write & fdatasync  0.001001
    write (O_FSYNC)    0.005761
    write (O_DSYNC)    0.005894

    So fdatasync is faster for me?


Hi everybody!

Which wal sync method is the fastest under linux 2.6.x?
I'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.

I've tried to switch to open_sync which seems to work 
faster than default fdatasync, but is it crash-safe?







[PERFORM] wal sync method

2006-02-27 Thread Evgeny Gridasov
Hi everybody!

Which wal sync method is the fastest under linux 2.6.x?
I'm using RAID-10 (JFS filesystem), 2xXEON, 4 Gb RAM.

I've tried to switch to open_sync which seems to work 
faster than default fdatasync, but is it crash-safe?

-- 
Evgeny Gridasov
Software Engineer 
I-Free, Russia

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] fsync and battery-backed caches

2006-02-27 Thread Tino Wildenhain

Javier Somoza schrieb:


Hi all

Is it safe to disable fsync when you have a battery-backed disk cache?
   
Thx



No. fsync moves the data from OS memory cache to disk-adaptor
cache which is required to benefit from battery backup.

If this data is written to the plates immediately depends on settings
of your disk adaptor card.

Regards
Tino

---(end of broadcast)---
TIP 6: explain analyze is your friend


[PERFORM] fsync and battery-backed caches

2006-02-27 Thread Javier Somoza





    Hi all

    Is it safe to disable fsync when you have a battery-backed disk cache?
    
    Thx




