On Tuesday, December 10, 2013 at 5:28:21 PM UTC+8, Chris Pearce wrote:
> Hi All,
> 
> Can we start using C++ STL containers like std::set, std::map, 
> std::queue in Mozilla code please? Many of the STL containers are more 
> convenient to use than our equivalents, and more familiar to new 
> contributors.
> 
> I understand that we used to have a policy of not using STL in mozilla 
> code since some older compilers we wanted to support didn't have very 
> good support, but I'd assume that that argument no longer holds since 
> already build and ship a bunch of third party code that uses std 
> containers (angle, webrtc, chromium IPC, crashreporter), and the sky 
> hasn't fallen.
> 
> I'm not proposing a mass rewrite converting nsTArray to std::vector, 
> just that we allow STL in new code.
> 
> Are there valid reasons why should we not allow C++ STL containers in 
> Mozilla code?
> 
> Cheers,
> Chris P.

Title:       The core of the core of the big data solutions -- Map
Author:      pengwenwei
Email:      
Language:    c++
Platform:    Windows, linux
Technology:  Perfect hash algorithm
Level:       Advanced
Description: Map algorithm with high performance
Section:     MFC c++ map stl
SubSection:  c++ algorithm
License:     (GPLv3)

    Download demo project - 1070 Kb
    Download source - 1070 Kb

Introduction:
In C++ programs, maps are used everywhere, and the performance of the map is 
often the bottleneck of the whole program. This is especially true with large 
data whose records are closely inter-related, so that the data cannot be 
distributed and processed in parallel. In such conditions, the performance of 
the map becomes the key technology.

In my work experience in the telecommunications and information security 
industries, I dealt with large volumes of low-level data, especially the most 
complex data of the information security industry; none of it can do without 
maps.

For example: IP tables, MAC tables, telephone number lists, domain name 
resolution tables, ID number lookup tables, the Trojan and virus signature 
databases used in cloud scanning, and so on.

The map in the STL library uses binary search over a balanced tree, so it has 
the worst performance of the three. Google's hash map currently has the best 
performance and memory footprint, but it carries a probability of repeated 
collisions. Big data systems today rarely use a map with any collision 
probability, especially where billing is involved, where errors are 
unacceptable.

Now I am publishing my algorithms here. There are three kinds of map; what is 
built in each case is a hash map. You can run the comparison tests yourself: 
my algorithm has zero probability of collision, yet its performance is still 
better than an ordinary hash algorithm, and even its average performance 
differs little from Google's.

My algorithm is a perfect hash algorithm. Its key indexing and the principle 
of its compression algorithm are out of the ordinary; most important is its 
completely different structure, so the key-index compression is fundamentally 
different. The most direct benefit for a program: where the original map 
needed ten servers for a solution, now only one server is needed.
Declare: the code cannot be used for commercial purposes; for commercial 
applications, you can contact me via QQ 75293192.
Download:
https://sourceforge.net/projects/pwwhashmap/files

Applications:
First, modern warfare cannot do without querying masses of information. If a 
query for enemy target information is a second slower, it could mean a missed 
combat opportunity and the loss of the entire war. Information retrieval is 
inseparable from the map; if military products used pwwhashMap instead of the 
traditional map, you would be the winner.

Second, a router's performance determines surfing speed; just replacing the 
maps in open source router code with pwwHashMap can increase its speed 
tenfold. The router's DHCP protocol has many tables to query and update, such 
as the IP and MAC tables, and all of these are handled by maps. But until 
now all of those maps have used the STL library, whose performance is very 
low, while using a hash map introduces an error probability, so the only 
option has been to spread packets across multiple routers. With pwwHashMap, 
you can save at least ten sets of equipment.

Third, Hadoop is currently the recognized big data solution, and the most 
fundamental thing about it is its extremely heavy use of maps in place of SQL 
and tables. Hadoop assumes the volume of data is so huge that the data cannot 
be moved at all, so analysis must be carried out where the data lives. But if 
the maps in the open source Hadoop code were changed to pwwHashMap, 
performance would increase a hundredfold without any problems.


Background material that may be useful, such as an introduction to the basic 
ideas presented:
http://blog.csdn.net/chixinmuzi/article/details/1727195

_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
