> Look carefully at the error message: the types you're passing in don't
> match. For instance, you're passing in a message handler that returns
> a tuple, but the rdd return type you're specifying (the 5th type
> argument) is just String.
>
> On Fri, May 6, 2016 at 9:49 AM, Eric Friedman
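For context, a minimal sketch of the offset-based overload under discussion
(spark-streaming-kafka in Spark 1.x; the topic name, ssc, and kafkaParams are
illustrative assumptions, not taken from the thread):

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Start topic "mytopic", partition 0, at offset 0 (values illustrative).
val fromOffsets = Map(TopicAndPartition("mytopic", 0) -> 0L)

// The 5th type argument must match the messageHandler's return type:
// here (String, String), not String.
val stream = KafkaUtils.createDirectStream[
    String, String, StringDecoder, StringDecoder, (String, String)](
  ssc, kafkaParams, fromOffsets,
  (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message))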
'com.yammer.metrics:metrics-core:2.2.0'
On Fri, May 6, 2016 at 7:47 AM, Eric Friedman <eric.d.fried...@gmail.com>
wrote:
> Hello,
>
> I've been using createDirectStream with Kafka and now need to switch to
> the version of that API that lets me supply offsets for my topics. I'm
> unable t
Hello,
I've been using createDirectStream with Kafka and now need to switch to the
version of that API that lets me supply offsets for my topics. I'm unable
to get this to compile for some reason, even if I lift the very same usage
from the Spark test suite.
I'm calling it like this:
val
Hello,
Where in the Spark APIs can I get access to the Hadoop Context instance? I
am trying to implement the Spark equivalent of this
public void reduce(Text key, Iterable values, Context context)
        throws IOException, InterruptedException {
if (record == null) {
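A hedged sketch of a Spark counterpart, assuming a pair RDD named pairs:
Spark exposes no Hadoop Context, but TaskContext carries the closest
task-level information.

import org.apache.spark.TaskContext

// Rough Spark analogue of the reducer above (names illustrative):
// group by key, then apply the per-key logic as a plain function.
val reduced = pairs.groupByKey().map { case (key, values) =>
  // TaskContext.get exposes task-level metadata (partition id, attempt
  // number), the nearest analogue to Hadoop's Context inside a task.
  val partitionId = TaskContext.get.partitionId()
  (key, values.size, partitionId)
}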
Hello,
I have a table partitioned by year/month/day/hour/minute, where minute is a
10 minute slice. Each minute slice gets about 1GB.
I'd like to rollup the older partitions by hour. I've seen this done via
MapReduce (the resulting partition has __HIVE_DEFAULT_PARTITION__ in the
minute
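One hedged way to do such a rollup from Spark SQL, assuming a Hive-backed
table and dynamic partitioning; the table and column names here are
illustrative, not from the message:

// Rewrite one day's 10-minute slices into hourly partitions.
sqlContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
sqlContext.sql("""
  INSERT OVERWRITE TABLE events_hourly PARTITION (year, month, day, hour)
  SELECT payload, year, month, day, hour
  FROM events
  WHERE year = 2016 AND month = 5 AND day = 1
""")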
installed but not JDK 7 and
it's somehow still finding the Java 6 javac.
On Tue, Aug 25, 2015 at 3:45 AM, Eric Friedman
eric.d.fried...@gmail.com wrote:
I'm trying to build Spark 1.4 with Java 7 and despite having that as my
JAVA_HOME, I get
[INFO] --- scala-maven-plugin:3.2.2:compile (scala
I'm trying to build Spark 1.4 with Java 7 and despite having that as my
JAVA_HOME, I get
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @
spark-launcher_2.10 ---
[INFO] Using zinc server for incremental compilation
[info] Compiling 8 Java sources to
If I have a Hive table with six columns and create a DataFrame (Spark
1.4.1) using a sqlContext.sql(select * from ...) query, the resulting
physical plan shown by explain reflects the goal of returning all six
columns.
If I then call select(one_column) on that first DataFrame, the resulting
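A hedged way to check whether the narrower projection actually prunes,
assuming the table is registered in the metastore (my_table and one_column
are placeholders):

val df = sqlContext.sql("SELECT * FROM my_table")
val narrow = df.select("one_column")
// explain(true) prints the logical and physical plans; column pruning
// should appear as a projection of just one_column.
narrow.explain(true)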
In preparing a DataFrame (spark 1.4) to use with MLlib's kmeans.train
method, is there a cleaner way to create the Vectors than this?
data.map{r => Vectors.dense(r.getDouble(0), r.getDouble(3), r.getDouble(4),
r.getDouble(5), r.getDouble(6))}
Second, once I train the model and call predict on my
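For the first question, a hedged, slightly cleaner construction under the
same assumptions (the column indices are the ones in the message):

import org.apache.spark.mllib.linalg.Vectors

val cols = Seq(0, 3, 4, 5, 6)
// Pull the chosen doubles from each Row once, then build the dense vector.
val vectors = data.map(r => Vectors.dense(cols.map(r.getDouble).toArray))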
Eric Friedman created SPARK-8566:
Summary: lateral view query blows up unless pushed down to a
subquery
Key: SPARK-8566
URL: https://issues.apache.org/jira/browse/SPARK-8566
Project: Spark
I logged this Jira this morning:
https://issues.apache.org/jira/browse/SPARK-8566
I'm curious if any of the cognoscenti can advise as to a likely cause of
the problem?
[
https://issues.apache.org/jira/browse/SPARK-8566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598159#comment-14598159
]
Eric Friedman commented on SPARK-8566:
--
Unfortunately I cannot disclose the exact
, there is SPARK-6380
https://issues.apache.org/jira/browse/SPARK-6380 that hopes to simplify
this particular case.
Michael
On Sat, Mar 21, 2015 at 3:02 PM, Eric Friedman eric.d.fried...@gmail.com
wrote:
I have a couple of data frames that I pulled from SparkSQL and the
primary key of one is a foreign
)
df1.join(df2, df1("column_id") === df2("column_id")).select("t1.column_id")
Finally, there is SPARK-6380
https://issues.apache.org/jira/browse/SPARK-6380 that hopes to simplify
this particular case.
Michael
On Sat, Mar 21, 2015 at 3:02 PM, Eric Friedman eric.d.fried...@gmail.com
wrote:
I
I have a couple of data frames that I pulled from SparkSQL and the primary
key of one is a foreign key of the same name in the other. I'd rather not
have to specify each column in the SELECT statement just so that I can
rename this single column.
When I try to join the data frames, I get an
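A hedged sketch of the usual workaround for the (presumably
ambiguous-column) error, in the Spark 1.3 DataFrame API: alias both frames
so the shared key can be referenced unambiguously. column_id is the name
used in the question; t1/t2 are illustrative aliases.

import org.apache.spark.sql.functions.col

val joined = df1.as("t1").join(df2.as("t2"),
  col("t1.column_id") === col("t2.column_id"))
// Select the key from one side only to avoid the ambiguity.
val result = joined.select(col("t1.column_id"))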
My job crashes with a bunch of these messages in the YARN logs.
What are the appropriate steps in troubleshooting?
15/03/19 23:29:45 ERROR shuffle.RetryingBlockFetcher: Exception while
beginning fetch of 10 outstanding blocks (after 3 retries)
15/03/19 23:29:45 ERROR
seeing that particular error before. It indicates to me
that the SparkContext is null. Is this maybe a knock-on error from the
SparkContext not initializing? I can see it would then cause this to
fail to init.
On Tue, Mar 17, 2015 at 7:16 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
Yes
, 2015 at 7:43 AM, Sean Owen so...@cloudera.com wrote:
OK, did you build with YARN support (-Pyarn)? and the right
incantation of flags like -Phadoop-2.4
-Dhadoop.version=2.5.0-cdh5.3.2 or similar?
On Tue, Mar 17, 2015 at 2:39 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
I did not find
surprise me, but then, why are these two
builds distributed?
On Sun, Mar 15, 2015 at 6:22 AM, Eric Friedman
eric.d.fried...@gmail.com wrote:
Is there a reason why the prebuilt releases don't include current CDH
distros and YARN support?
Eric Friedman
Is there a reason why the prebuilt releases don't include current CDH distros
and YARN support?
Eric Friedman
The Python installed in your cluster is 2.5. You need at least 2.6.
Eric Friedman
On Dec 30, 2014, at 7:45 AM, Jaggu jagana...@gmail.com wrote:
Hi Team,
I was trying to execute a Pyspark code in cluster. It gives me the following
error. (When I run the same job in local
:27 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
Was your spark assembly jarred with Java 7? There's a known issue with jar
files made with that version. It prevents them from being used on
PYTHONPATH. You can rejar with Java 6 for better results.
Eric Friedman
On Dec 29, 2014
how grateful I am
to have a usable release in 1.2 and look forward to 1.3 and beyond with real
excitement.
Eric Friedman
On Dec 28, 2014, at 5:40 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey Eric,
I'm just curious - which specific features in 1.2 do you find most
help
Was your spark assembly jarred with Java 7? There's a known issue with jar
files made with that version. It prevents them from being used on PYTHONPATH.
You can rejar with Java 6 for better results.
Eric Friedman
On Dec 29, 2014, at 8:01 AM, Naveen Kumar Pokala npok...@spcapitaliq.com
intriguing.
Getting GraphX for PySpark would be very welcome.
It's easy to find fault, of course. I do want to say again how grateful I am to
have a usable release in 1.2 and look forward to 1.3 and beyond with real
excitement.
Eric Friedman
On Dec 28, 2014, at 5:40 PM, Patrick Wendell
Spark 1.2.0 is SO much more usable than previous releases -- many thanks to
the team for this release.
A question about progress of actions. I can see how things are progressing
using the Spark UI. I can also see the nice ASCII art animation on the
spark driver console.
Has anyone come up with
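One hedged possibility, assuming the driver-side SparkContext is at hand: a
SparkListener can surface progress programmatically (the counting logic here
is illustrative).

import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

sc.addSparkListener(new SparkListener {
  private var tasksDone = 0
  // Invoked on the driver as each task completes.
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    tasksDone += 1
    println(s"tasks completed so far: $tasksDone")
  }
})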
+1
Eric Friedman
On Oct 9, 2014, at 12:11 AM, Sung Hwan Chung coded...@cs.stanford.edu wrote:
Are there a large number of non-deterministic lineage operators?
This seems like a pretty big caveat, particularly for casual programmers who
expect consistent semantics between Spark
[
https://issues.apache.org/jira/browse/SPARK-3604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140713#comment-14140713
]
Eric Friedman commented on SPARK-3604:
--
many more frames of the same content than
I have a SchemaRDD which I've gotten from a parquetFile.
Did some transforms on it and now want to save it back out as parquet again.
Getting a SchemaRDD proves challenging because some of my fields can be
null/None and SQLContext.inferSchema rejects those.
So, I decided to use the schema on
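A hedged sketch of that approach, assuming a Row-to-Row transform function:
reuse the schema read from the parquet file via applySchema (the Spark 1.x
SQLContext API) so that nullable fields survive, instead of relying on
inferSchema. Paths and the transform are illustrative.

val original = sqlContext.parquetFile("in.parquet")
val transformed = original.map(transform) // transform: Row => Row, assumed
// Re-attach the original schema rather than inferring a new one.
val out = sqlContext.applySchema(transformed, original.schema)
out.saveAsParquetFile("out.parquet")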
are investigating.
On Thu, Sep 18, 2014 at 8:49 AM, Eric Friedman
eric.d.fried...@gmail.com
wrote:
I have a SchemaRDD which I've gotten from a parquetFile.
Did some transforms on it and now want to save it back out as parquet
again.
Getting a SchemaRDD proves challenging because some of my
How many partitions do you have in your input rdd? Are you specifying
numPartitions in subsequent calls to groupByKey/reduceByKey?
On Sep 17, 2014, at 4:38 AM, Oleg Ruchovets oruchov...@gmail.com wrote:
Hi,
I am execution pyspark on yarn.
I have successfully executed initial dataset
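For reference, a minimal sketch of passing the count explicitly; the value
400 is purely illustrative:

// Both APIs accept an explicit number of partitions.
val grouped = rdd.groupByKey(400)
val summed  = rdd.reduceByKey(_ + _, 400)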
, and create
RDD by
newAPIHadoopFile(), then union them together.
On Mon, Sep 15, 2014 at 5:49 AM, Eric Friedman
eric.d.fried...@gmail.com wrote:
I neglected to specify that I'm using pyspark. Doesn't look like these
APIs have been bridged.
Eric Friedman
On Sep 14, 2014, at 11:02 PM
sc.textFile takes a minimum # of partitions to use.
is there a way to get sc.newAPIHadoopFile to do the same?
I know I can repartition() and get a shuffle. I'm wondering if there's a
way to tell the underlying InputFormat (AvroParquet, in my case) how many
partitions to use at the outset.
What
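A hedged sketch of one way to influence splits without a shuffle: steer the
split size through the Hadoop Configuration that newAPIHadoopFile accepts.
MyInputFormat, MyKey, and MyValue are placeholders for the real format's
classes.

import org.apache.hadoop.conf.Configuration

val conf = new Configuration(sc.hadoopConfiguration)
// Smaller max split size => more input splits => more partitions.
conf.set("mapreduce.input.fileinputformat.split.maxsize",
  (64 * 1024 * 1024).toString)
val rdd = sc.newAPIHadoopFile(path, classOf[MyInputFormat],
  classOf[MyKey], classOf[MyValue], conf)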
, which appears
to be the intended replacement in the new APIs.
On Mon, Sep 15, 2014 at 9:35 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
sc.textFile takes a minimum # of partitions to use.
is there a way to get sc.newAPIHadoopFile to do the same?
I know I can repartition() and get
.
I'm also not sure if this is something to do with pyspark, since the
underlying Scala API takes a Configuration object rather than
dictionary.
On Mon, Sep 15, 2014 at 11:23 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
That would be awesome, but doesn't seem to have any effect
Hi,
I have a directory structure with parquet+avro data in it. There are a
couple of administrative files (.foo and/or _foo) that I need to ignore
when processing this data or Spark tries to read them as containing parquet
content, which they do not.
How can I set a PathFilter on the
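A hedged sketch: FileInputFormat honors a filter class registered on the
Hadoop configuration, which Spark's input methods pick up. The filter class
itself is illustrative.

import org.apache.hadoop.fs.{Path, PathFilter}

class IgnoreAdminFiles extends PathFilter {
  // Skip .foo / _foo administrative files.
  override def accept(p: Path): Boolean =
    !p.getName.startsWith(".") && !p.getName.startsWith("_")
}

sc.hadoopConfiguration.setClass("mapreduce.input.pathFilter.class",
  classOf[IgnoreAdminFiles], classOf[PathFilter])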
Yes. And point that variable at your virtual env python.
Eric Friedman
On Aug 22, 2014, at 6:08 AM, Earthson earthson...@gmail.com wrote:
Do I have to deploy Python to every machine to make $PYSPARK_PYTHON work
correctly?
--
View this message in context:
http://apache-spark
+1 for such a document.
Eric Friedman
On Aug 15, 2014, at 1:10 PM, Kevin Markey kevin.mar...@oracle.com wrote:
Sandy and others:
Is there a single source of Yarn/Hadoop properties that should be set or
reset for running Spark on Yarn?
We've sort of stumbled through one property
I have a CDH5.0.3 cluster with Hive tables written in Parquet.
The tables have the DeprecatedParquetInputFormat on their metadata, and
when I try to select from one using Spark SQL, it blows up with a stack
trace like this:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
done. Are you replacing / not using that?
On Sun, Aug 10, 2014 at 5:36 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
I have a CDH5.0.3 cluster with Hive tables written in Parquet.
The tables have the DeprecatedParquetInputFormat on their metadata, and
when I try to select from one
:20 PM, Eric Friedman
eric.d.fried...@gmail.com wrote:
Hi Sean,
Thanks for the reply. I'm on CDH 5.0.3 and upgrading the whole cluster to
5.1.0 will eventually happen but not immediately.
I've tried running the CDH spark-1.0 release and also building it from
source. This, unfortunately
Thanks Michael, I can try that too.
I know you guys aren't in sales/marketing (thank G-d), but given all the hoopla
about the CDH-DataBricks partnership, it'd be awesome if you guys were
somewhat more aligned, by which I mean that the DataBricks releases on Apache
that say "for CDH5" would
On Sun, Aug 10, 2014 at 2:43 PM, Michael Armbrust mich...@databricks.com
wrote:
if I try to add hive-exec-0.12.0-cdh5.0.3.jar to my SPARK_CLASSPATH, in
order to get DeprecatedParquetInputFormat, I find out that there is an
incompatibility in the SerDeUtils class. Spark's Hive snapshot
Best Regards
On Thu, Jul 24, 2014 at 11:00 PM, Eric Friedman eric.d.fried...@gmail.com
wrote:
I'm trying to run a simple pipeline using PySpark, version 1.0.1
I've created an RDD over a parquetFile and am mapping the contents with a
transformer function and now wish to write the data
I understand that GraphX is not yet available for pyspark. I was wondering
if the Spark team has set a target release and timeframe for doing that
work?
Thank you,
Eric
I'm trying to run a simple pipeline using PySpark, version 1.0.1
I've created an RDD over a parquetFile and am mapping the contents with a
transformer function and now wish to write the data out to HDFS.
All of the executors fail with the same stack trace (below)
I do get a directory on HDFS,
I'm using spark 1.0.1 on a quite large cluster, with gobs of memory, etc.
Cluster resources are available to me via Yarn and I am seeing these
errors quite often.
ERROR YarnClientClusterScheduler: Lost executor 63 on host: remote Akka
client disassociated
This is in an interactive shell
, and the message you see is just a side effect.
Andrew
2014-07-23 8:27 GMT-07:00 Eric Friedman eric.d.fried...@gmail.com:
I'm using spark 1.0.1 on a quite large cluster, with gobs of memory, etc.
Cluster resources are available to me via Yarn and I am seeing these
errors quite often.
ERROR
.
On Wed, Jul 23, 2014 at 8:40 PM, Eric Friedman eric.d.fried...@gmail.com
wrote:
hi Andrew,
Thanks for your note. Yes, I see a stack trace now. It seems to be an
issue with python interpreting a function I wish to apply to an RDD. The
stack trace is below. The function is a simple
Can position be null? Looks like there may be constraints with predicate push
down in that case. https://github.com/apache/spark/pull/511/
On Jul 18, 2014, at 8:04 PM, Christos Kozanitis kozani...@berkeley.edu
wrote:
Hello
What is the order with which SparkSQL deserializes parquet
I used to use SPARK_LIBRARY_PATH to specify the location of native libs
for lzo compression when using spark 0.9.0.
The references to that environment variable have disappeared from the docs
for
spark 1.0.1 and it's not clear how to specify the location for lzo.
Any guidance?
Hi
I am working with a Cloudera 5 cluster with 192 nodes and can’t work out how to
get the spark repl to use more than 2 nodes in an interactive session.
So, this works, but is non-interactive (using yarn-client as MASTER)
-Sandy
On Mon, May 19, 2014 at 8:08 AM, Eric Friedman e...@spottedsnake.net wrote:
Hi
I am working with a Cloudera 5 cluster with 192 nodes and can’t work out how
to get the spark repl to use more than 2 nodes in an interactive session.
So, this works, but is non-interactive (using
All of the examples that I've found for training NB classifiers seem
to have textual data as input. Is there a way to build a classifier
with more general attributes?
I found this jira ticket
(https://issues.apache.org/jira/browse/MAHOUT-286), but it's been
closed:duplicate under
attributes) at that stage and skip the TF-IDF. It may need a little
hacking.
On Tue, Jul 31, 2012 at 6:21 PM, Eric Friedman e...@spottedsnake.net
wrote:
All of the examples that I've found for training NB classifiers seem
to have textual data as input. Is there a way to build a classifier
Hi all,
I am using windows remote desktop to access cygwin on a remote machine
with a disk on my local computer remote mounted, via remote desktop. On
the remote machine this disk shows up under my computer but is not
assigned a drive letter.
How do I access it from within cygwin? I can't
Versions: 1.4.1
Reporter: Eric Friedman
Suppose there are a set of dependencies like this:
a -> b -> c -> d,e
d and e are the same module/organisation, but with different revisions. c
requires both revisions and they can coexist because they are in different
packages (d is antlr-2.7.6; e
At some point in the past I requested that a listing of subsections be provided
at the beginning of each section. I still think this is a nice feature.
I think it would be better, though, if this listing had some title.
For example, an Introduction section would look like this:
*Introduction*
Douglas Gregor wrote:
On Monday 10 November 2003 08:57 pm, Eric Friedman wrote:
Eric Friedman wrote:
Doug (or others)-
Right now it seems the description element has no effect on the
generated HTML when inside a typedef tag nested in a class. For
instance, I have documented the variant
Probably simple to implement, but sorely missing:
<libraryname alt="MPL">Boost Metaprogramming Library</libraryname>
<libraryname alt="Any">Boost.Any</libraryname>
...and so on
You get the idea. Right now, alt is not supported.
Thanks,
Eric
---
Peter Dimov wrote:
[snip]
Provide operator<. Wait six months. Collect feedback. If there is evidence
that operator< is evil, remove it and document why it is not supplied.
OK, I'm willing to go along with this. I'll probably also include
operator==, with a similar plan for future evaluation.
Peter Dimov wrote:
When there is one and only one strict weak ordering (equality) for a
type, not using operator< and operator== because some users might have
different expectations is misguided. It is pretty clear what set<variant> or
find(first, last, v) is supposed to do; variant_less or
Dave Gomboc wrote:
[snip]
I don't like get() because I cannot write x.get() when x is a POD. This
would mean I have to support nilable<T> and T with different code,
which is exactly what I'm trying to avoid.
Why not overload boost::get again for optional? This would certainly improve
Howard,
Howard Hinnant wrote:
[snip]
If you will mail me a complete condensed demo, I'll take a look. I
downloaded boost 1.30.2 but was unable to find boost/variant.
-Howard
Thanks for offering your assistance. Variant will make its debut in 1.31.
Thus, you'll need to work from Boost CVS
Alexander Nasonov
Eric Friedman wrote:
But suppose I have a variant v3, with content of a different type (call
it
T3). Then the assignment v1 = v3 is far more complicated (we can't use
T1::operator=) and, without double storage, far more dangerous. The
single
storage implementation
Alexander Nasonov wrote:
Eric Friedman wrote:
If I understand you correctly, earlier versions of variant did precisely
what you describe. Unfortunately, the assumption you make is false in
general. See
http://aspn.activestate.com/ASPN/Mail/Message/boost/1311813.
Eric
Well
Gennadiy Rozental wrote:
BTW, after looking at the implementation I was a bit disappointed to
see two copies of the storage. It seems to nullify one
important reason for using variants (space savings), and it generates
more code than a single-storage version. I know you had some
Dave,
David Abrahams wrote:
[snip]
If you'd
like to see relatively recently-generated HTML, check out
http://www.cs.rpi.edu/~gregod/boost/doc/html/variant.html.
Suggestion: check an index page into the CVS which redirects to this
page.
The link I provided above will not be home to the
Dear Boosters,
I've recently added reference support to variant. For instance, the
following is now supported:
int i = 3;
boost::variant<int&, double> var(i);
i = 2;
BOOST_CHECK( boost::get<int>(var) == 2 );
However, such support required the addition of an additional variant
Oops! I forgot to attach the test file!
It's now attached.
Thanks,
Eric
Eric Friedman wrote:
Dear Boosters,
I've recently added reference support to variant. For instance, the
following is now supported:
int i = 3;
boost::variant<int&, double> var(i);
i = 2;
BOOST_CHECK
Dave,
Please see the BoostBook reference documentation for variant. The HTML is
quite out of sync with the current implementation. I haven't removed it from
CVS yet though because I am still in the process of porting the examples,
etc. to BoostBook.
Sorry for the confusion.
Thanks,
Eric
Joel,
Joel de Guzman wrote:
[snip]
Also, is there a reason why we can't allow:
variant<int&, double> var;
Nothing fundamental no, just some additional metaprogramming ;) It's
supported now (see variant_reference_test).
See below for a note on the semantics of the resultant variant type,
David Abrahams wrote:
[snip]
2.
All members of variant satisfy the strong guarantee of
exception-safety.
Seriously? What if an underlying type's assignment operator gives
only the basic guarantee? Surely, if you in fact use the
underlying type's assignment
David Abrahams wrote:
Eric Friedman [EMAIL PROTECTED] writes:
Dave,
Please see the BoostBook reference documentation for variant. The HTML
is
quite out of sync with the current implementation. I haven't removed it
from
CVS yet though because I am still in the process of porting
Allen Bierbaum wrote:
I just tried to use boost::variant with the HEAD version of boost and I
am getting multiply defined symbols in empty.hpp. (gcc 3.2 on Linux)
I fixed it by adding inline to the two non-template methods (see below):
inline bool operator==(const empty&, const empty&)
{
Hi Petr,
Petr Kočmíd wrote:
Hi,
There is a problem with variant library simply including boost/variant,
for
current anonymous cvs.
Can be demonstrated by compiling a binary tree example from the doc page.
Both
gcc 3.2 and 3.2.2 says:
* In file included from
John,
John Maddock wrote:
[snip]
Everything in suffix.hpp is generic macro workarounds - it's not dependent
upon specific compilers just whether the appropriate macro is defined.
I think you are going to have to use a dirty workaround here: check for
gcc
before using
Brian Simpson wrote:
Eric,
[snip]
Heavily qualified explanation to follow... :)
[snip]
Oops! I saw the typedef'd names, observed their similarity to the names
of
the template parameters, and immediately assumed that the switch was based
on those parameters. Therefore, the value I
Hartmut,
Hartmut Kaiser wrote:
Eric Friedman wrote:
[snip]
I believe the problem should now be fixed. Let me know if it
still doesn't work.
I've checked a new CVS snapshot just right now, but the problem
persists. Sorry.
I hadn't actually committed the fix at the time I posted
Dave (and others):
Eric Friedman wrote:
David Abrahams wrote:
Hi,
BOOST_EXPLICIT_TEMPLATE_TYPE is great!
However:
[snip]
// specialization
template <>
int f<void>( /*what goes here?*/ )
{
}
we have no mechanism for handling these. Any ideas
Hartmut,
Hartmut Kaiser wrote:
Beman Dawes wrote:
The variant library developers were checking in changes
almost daily until a week or two ago, so you might want to make sure
you have the latest from CVS.
Thanks for your response.
Yes, I have the latest CVS (Boost::HEAD) snapshot. BTW
David Abrahams wrote:
Matthias Troyer writes:
Dear Boosters,
Since some of the applications and libraries we plan on releasing soon
rely on Boost features and bugfixes that are in the CVS but not in
Boost 1.30.[012] I wonder what the plans are for the Boost 1.31.0
release? Since we
John Maddock wrote:
In adding output streaming support for variant, I've realized the
standard library packaged with gcc 2.9.7 and below does not support the
templated stream classes. I've also realized that Boost.Tuple features a
workaround addressing this same problem, with a comment to
In adding output streaming support for variant, I've realized the
standard library packaged with gcc 2.9.7 and below does not support the
templated stream classes. I've also realized that Boost.Tuple features a
workaround addressing this same problem, with a comment to add a defect
macro to
David Abrahams wrote:
Hi,
BOOST_EXPLICIT_TEMPLATE_TYPE is great!
However:
[snip]
// specialization
template <>
int f<void>( /*what goes here?*/ )
{
}
we have no mechanism for handling these. Any ideas?
Wouldn't BOOST_EXPLICIT_TEMPLATE_TYPE(void) work?
Eric
With the addition of the variant library has come several closely-related
components such as boost::get<T>, boost::apply_visitor,
boost::static_visitor, and boost::visitor_ptr. While I do plan to submit a
more general-purpose visitation library for review in the near future,
currently these
Aleksey (and others),
I'm working on getting variant to compile under MSVC 6, but I've come
across what seems to be an ETI problem that needs a workaround.
However, I'm not sure what is the most appropriate way to make the fix.
Below is the error output from the regression tests
Aleksey (and all),
In working on porting boost::variant to Borland, I've come across some
trouble with a bug in the compiler.
Specifically, I'm getting "Cannot have both a template class and
function named 'bind1st'" and similarly for bind2nd. I know other MPL
headers use
David Abrahams writes:
Eric Friedman [EMAIL PROTECTED] writes:
I've found that mpl::is_sequence fails to operate correctly on certain
types
under MSVC7. I haven't tested extensively, but there certainly seems to
be
some problem with class templates from namespace std. (The problem
likely
I apologize if this has already been asked, but why aren't the libs/mpl/test
sources included in regression testing? I know some tests are missing and
some are perhaps not as robust as they might be, but it seems some testing is
better than no testing.
I'd like to write an appropriate jamfile and
David Abrahams wrote:
I'd like to write an appropriate jamfile and include it in CVS, unless
there
are objections.
There's already a Jamfile in libs/mpl/test. It's at version 1.9.
Oops, I missed this. Thanks.
Anyhow, my concern related more to the regression tables. But with Beman's
Gennadiy Rozental wrote:
I argue that top-level const type arguments are meaningless in the
context
of variant. Given the example you provide:
typedef boost::variant<int const, std::string const> GlobalParameter;
GlobalParameter input_socket(12345);
input_socket = 54321; // no
Gennadiy Rozental wrote:
1. There are theoretical limits on the size of MPL sequences. See the MPL
docs
(BOOST_MPL_LIMIT_LIST_SIZE for list)
2. You could limit variant support only to lists that do not exceed
your
own limit BOOST_VARIANT_LIMIT_TYPES.
Not true. There is absolutely no limit on
Gennadiy Rozental wrote:
2. Could a type that implements a swap() method somehow follow the road of
the second case as well? For example, could you somehow deduce T* from the
buffer and swap it with a local copy of the argument?
Yes, I can look into such optimizations. But as I noted in previous
Gennadiy Rozental wrote:
So what I want is
typedef boost::variant<int const, std::string const> GlobalParameter;
GlobalParameter input_socket( 12345 ); // localhost::12345
GlobalParameter output_socket( "MultiplexorSocket" );
[snip]
What if variant is the member of the class
Gennadiy Rozental wrote:
overview.) This technique is necessary to provide a general
guarantee
of strong exception-safety, which in turn is necessary to maintain
a never empty invariant for variant.
What is this invariant? And why is it that important.
The invariant is quite
Gennadiy Rozental wrote:
Eric Friedman [EMAIL PROTECTED] wrote:
[snip]
If variant is given types as an MPL sequence (e.g., variant<mpl::list<T1,
T2, ..., TN> > instead of variant<T1, T2, ..., TN>), then the technique you
propose will not work. Please prove me incorrect, but I don't think you
can
Allen Bierbaum wrote:
Eric Friedman wrote:
Allen Bierbaum wrote:
I have been very impressed with the Variant library and started using it
with Boost 1.29.
Good to hear. I'd be interested in your experience using the library in
a
real-world (?) application.
I was able to get
Gennadiy Rozental:
template<typename T>
void foo( T const& )
{
}

int main()
{
    boost::variant<int, ...> v = 5;
    // Here I want to pass a const reference to the integer value of the
    // variant to function foo
    // foo( get<int>( v ) ); - type T is incorrect
    foo(
Gennadiy Rozental wrote:
While I do agree O(1) is better than O(N), I would like to point out
that
it is usable only when the pseudo-variadic template interface is used
(i.e.,
variant<T1, T2, ..., TN> as opposed to variant<Types>).
Why? And to be absolutely clear: what do you mean by it?
By