Remove me too.
> On Jul 16, 2021, at 1:48 PM, Prajwal Nagaraj wrote:
>
> I don't even know who you guys are. I'm nowhere near Rajasthan; someone
> must've sent you the wrong email address.
> Please remove me from your mailing list.
>
>
> Thank you
>
> On Fri, 16 Jul, 2021, 1:45 pm Parth
And please send any app_ids privately to me since this list is public.
On Tue, May 19, 2020 at 4:32 PM Rahul Ravindran wrote:
> Could you send the app-id for apps which are having trouble deploying via
> appcfg?
>
> On Tue, May 19, 2020 at 12:11 PM Linus Larsen
> wrote:
Could you send the app-id for apps which are having trouble deploying via
appcfg?
On Tue, May 19, 2020 at 12:11 PM Linus Larsen
wrote:
> I just tried updating another service (I'm using Java) which now fails:
>
> 98% Application deployment failed. Message: Deployments using appcfg are
> no
We consider JPEG, MPEG, and some other file formats as not compressible, and
hence do not compress these content types.
~Rahul.
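The rule described above can be sketched as a simple content-type check (a sketch only; the set below is illustrative, not App Engine's actual internal list of incompressible types):

```python
# Illustrative set of content types that are already compressed at the codec
# level, so gzip would gain little. NOT App Engine's actual internal list.
ALREADY_COMPRESSED = {"image/jpeg", "video/mpeg", "image/png", "application/zip"}

def should_gzip(content_type: str) -> bool:
    """Return True if gzip is likely to shrink a payload of this type."""
    # Strip parameters like "; charset=utf-8" and normalize case.
    base = content_type.split(";")[0].strip().lower()
    return base not in ALREADY_COMPRESSED

print(should_gzip("image/jpeg"))                 # False
print(should_gzip("text/html; charset=utf-8"))   # True
```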
On Thu, Apr 4, 2019 at 4:26 PM Joshua Smith
wrote:
> I didn’t get an answer in either place. But my experience has been that
> this list tends to produce answers whereas SO is
Looks like a typo. Could you try app-engine-Python?
I will file a bug to fix the doc
On Tue, Mar 19, 2019 at 10:01 PM Will H wrote:
> In the quickstart here:
> https://cloud.google.com/appengine/docs/standard/python3/quickstart
> There is a step to install the gcloud component
That is the idea. I encourage you to participate in the early releases etc
to ensure your use case is being met. You may have additional steps to
enable caching.
On Tue, Feb 12, 2019 at 5:30 PM Bruce Sherwood
wrote:
> That is very good news indeed. It's not immediately obvious from that
>
Development of the new Python 3-compatible ndb client is happening in the
Google Cloud Python client library github repo at
https://github.com/googleapis/google-cloud-python/tree/master/ndb . The
library is not usable as-is yet, but work is in progress and can be
monitored there.
On Tue, Feb 12,
Google has a one-year deprecation policy for any GA runtime.
Given that nothing has been announced yet, please know that your
application will continue running for at least a year, and that is the
*minimum* period before you would need to do anything.
I apologize for being very brief
Hello,
Your measurement of your application on your laptop does not accurately
represent all the memory used. Firstly, you will need to look at the RSS
memory for the process. In addition, any resources taken by the operating
system or kernel are not accounted for in your measurement, but are
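One quick way to inspect the process's RSS from Python itself is the stdlib `resource` module (a sketch; note the units differ by platform):

```python
import resource

# Peak resident set size (RSS) of the current process, as suggested above.
# Note: ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.
peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS so far: {peak_rss}")
```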
Not an exact match, but close: here is a sample with Django, Python 3.7,
and Cloud SQL.
On Sat, Nov 10, 2018 at 4:21 PM Charles tenorio
wrote:
> Is anyone using Django 2.0 with App Engine, Python 3.7, and Cloud
> Datastore? If you can, send me an example of CRUD! Thank you
>
> --
> You received
The instance might stay alive after it's been idle for 15 minutes, but you
won't be billed for it. Billing is based on 15-minute blocks, as long as there
is at least one active request in the 15-minute block.
We kill clones lazily to prevent excessive cold starts.
On Tue, Oct 9, 2018 at 5:22 AM vvv vvv
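The 15-minute-block rule described above can be sketched as follows (an assumption for illustration: blocks are fixed, aligned windows, and times are request start times in minutes):

```python
# Sketch of the billing rule: count distinct 15-minute blocks that contain
# at least one request. Assumes fixed, aligned block boundaries.
def billed_blocks(request_times_min, block_min=15):
    """Count distinct block-sized windows containing at least one request."""
    return len({t // block_min for t in request_times_min})

# Requests at minutes 1, 3, and 14 share one block; minute 31 opens another.
print(billed_blocks([1, 3, 14, 31]))  # 2
```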
Did you have a chance to look at
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/datastore/cloud-client
?
On Fri, Sep 28, 2018 at 10:50 AM vvv vvv wrote:
> Hi George, thanks for answering. dev_appserver.py is for the standard
> environment Python 2.7, I'm trying to run a
Unfortunately, dev_appserver does not yet work with Python 3.x (see
https://cloud.google.com/appengine/docs/standard/python3/testing-and-deploying-your-app#local-dev-server).
You need to run it from a virtualenv which is running python 2.7.
Alternatively, as specified in
Could you paste the entire command surface?
Additionally, which version of Google Cloud SDK are you using?
On Tue, Sep 25, 2018 at 2:25 PM Dewey Gaedcke wrote:
> Thanks for the response and clarification!!
> I remember being told way back NOT to use venv with GAE & so all these
> posts where
22, 2018 at 2:58 PM BLONDEV INC wrote:
> Hey, I added the model to requirements.txt and made some modifications to
> account for that. It is now working, if VERY slowly.
>
> On Saturday, September 22, 2018 at 2:45:47 PM UTC-4, Rahul Ravindran wrote:
>>
>> So, you cannot
>>
>> On Saturday, September 22, 2018 at 2:21:19 PM UTC-4, BLONDEV INC wrote:
>>>
>>> Yes. It works just fine...
>>>
>>> [image: PHOTO-2018-09-22-14-17-42.jpg]
>>
We don't use Conda. This seems like an issue with your application. Can you
run this locally successfully?
On Sat, Sep 22, 2018 at 9:50 AM BLONDEV INC wrote:
>
> Hi,
>
> I am getting this error message when I make a GET request to my app's URL.
>
> File "/srv/main.py", line 12, in
New submission from Rahul Ravindran <rahu...@gmail.com>:
make run_profile_task
runs the tests and, from my reading of the Makefile, does not seem to have any
mechanism to exclude tests.
Previously, on Python 3.6, this test test_poplib was
failing(https://bugs.pyth
Hello,
What is your app-id where you are seeing this?
Thanks,
~Rahul.
On Tue, Oct 31, 2017 at 1:37 PM, PK wrote:
> Many requests fail, usually Ajax calls but I just got one in the UI. I am
> in US Central/python runtime anybody else experiencing instability?
>
> Error: Server
On Thursday, April 16, 2015 12:47 PM, Rahul Ravindran rahu...@yahoo.com
wrote:
Hi, Below is my flume config and I am attempting to get Load Balancing sink
group to LB across multiple machines. I see only 2 threads created for the
entire sink group when using load balancing sink group and see
need
to investigate.
-- Lars
From: Rahul Ravindran rahu...@yahoo.com.INVALID
To: user@hbase.apache.org user@hbase.apache.org
Sent: Thursday, December 25, 2014 11:37 PM
Subject: Determining regions with low HDFS locality index
Hi, When an Hbase RS goes down(possibly because
Hi, When an HBase RS goes down (possibly because of hardware issues, etc.), the
regions get moved off that machine to other Region Servers. However, since the
new region servers do not have the backing HFiles, data locality for the newly
transitioned regions is not great and hence some of our
Rahul Ravindran created FLUME-2394:
--
Summary: Command line argument to disable monitoring for config
changes
Key: FLUME-2394
URL: https://issues.apache.org/jira/browse/FLUME-2394
Project: Flume
[
https://issues.apache.org/jira/browse/FLUME-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rahul Ravindran updated FLUME-2394:
---
Description: Flume monitors for changes to the config file and attempts to
re-initialize
Rahul Ravindran created FLUME-2395:
--
Summary: Flume does not shutdown cleanly on sending a term signal
when it is receiving events
Key: FLUME-2395
URL: https://issues.apache.org/jira/browse/FLUME-2395
Hello,
I created a Parquet file out of MR and attempted to use Drill to query the
file.
select * from /tmp/part-00.parquet
. . . . . . . . . . . . . . . . . ;
SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J:
Hi,
We are currently on 0.94.2(CDH 4.2.1) and would likely upgrade to 0.94.15
(CDH 4.6) primarily to use the above fix. We have turned off automatic major
compactions. We load data into an hbase table every 2 minutes. Currently, we
are not using bulk load since it created compaction issues.
Hi,
We are using CDH flume 1.3 (which ships with 4.2.1). We see this error in our
flume logs in our production system, and restarting flume did not help. Looking
at the flume code, it appears to be expecting the byte to be an OPERATION, but
it is not. Any ideas on what happened?
Thanks,
~Rahul.
; Rahul Ravindran
rahu...@yahoo.com
Sent: Thursday, June 27, 2013 11:24 AM
Subject: Re: Flume error in FileChannel
Looks like the file may have been corrupted. Can you verify if you are out of
disk space or can see something that might have caused the data to be corrupted?
Hari
On Thu, Jun 27
Hello,
I am trying to understand the downsides of having a large number of hfiles by
having a large hbase.hstore.compactionThreshold
This delays major compaction. However, the amount of data that needs to be
read and re-written as a single hfile during major compaction will remain the
same
hook for an earlier version of the row?
Thanks,
~Rahul.
From: Asaf Mesika asaf.mes...@gmail.com
To: user@hbase.apache.org user@hbase.apache.org; Rahul Ravindran
rahu...@yahoo.com
Sent: Tuesday, June 4, 2013 10:51 PM
Subject: Re: Scan + Gets are disk bound
Hi,
We are relatively new to HBase, and we are hitting a roadblock on our scan
performance. I searched through the email archives and applied a bunch of the
recommendations there, but they did not improve much. So, I am hoping I am
missing something which you could guide me towards. Thanks in
hotspotting.
~Rahul.
From: anil gupta anilgupt...@gmail.com
To: user@hbase.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Tuesday, June 4, 2013 9:31 PM
Subject: Re: Scan + Gets are disk bound
On Tue, Jun 4, 2013 at 11:48 AM, Rahul Ravindran rahu
From: Anoop John anoop.hb...@gmail.com
To: user@hbase.apache.org; Rahul Ravindran rahu...@yahoo.com
Cc: anil gupta anilgupt...@gmail.com
Sent: Tuesday, June 4, 2013 10:44 PM
Subject: Re: Scan + Gets are disk bound
When you set time range on Scan, some files can get skipped
Hi,
Is there a rough estimate on when 1.4 may be shipped? We were primarily
looking for https://issues.apache.org/jira/browse/FLUME-997 and perhaps,
looking to port that to 1.3.1 or use 1.4 if it is looking to ship sometime
soon(by end of June)
~Rahul.
Pinging again since this has been happening a lot more frequently recently
From: Rahul Ravindran rahu...@yahoo.com
To: User-flume user@flume.apache.org
Sent: Tuesday, May 7, 2013 8:42 AM
Subject: IOException with HDFS-Sink:flushOrSync
Hi,
We have noticed
(but
it was not in CDH4.1.2)
Hari
--
Hari Shreedharan
On Monday, May 13, 2013 at 7:23 PM, Rahul Ravindran wrote:
We are using cdh 4.1.2 - Hadoop version 2.0.0. Looks like cdh 4.2.1 also uses
the same Hadoop version. Any suggestions on any mitigations?
Sent from my phone. Excuse the terseness.
On May 13
...@cloudera.com
To: user@flume.apache.org user@flume.apache.org; Rahul Ravindran
rahu...@yahoo.com
Sent: Monday, May 6, 2013 9:57 PM
Subject: Re: Usage of use-fast-replay for FileChannel
Did you have an issue with the checkpoint that the entire 6G of data was
replayed (look
Hi,
We have noticed this a few times now where we appear to have an IOException
from HDFS and this stops draining the channel until the flume process is
restarted. Below are the logs: namenode-v01-00b is the active namenode
(namenode-v01-00a is standby). We are using Quorum Journal Manager
Hi,
For FileChannel, how much of a performance improvement in replay times was
observed with use-fast-replay? We currently have use-fast-replay set to false
and were replaying about 6 G of data. We noticed replay times of about one
hour. I looked at the code and it appears that fast-replay
Hi,
Flume writes to HDFS(we use Cloudera 4.1.2 release and Flume 1.3.1) using the
HDFS nameservice which points to 2 namenodes (one of which is active and the
other is standby). When the HDFS service is restarted, the namenode which comes
up first becomes active. If the active namenode was
I have attached the zipped log file at
https://issues.apache.org/jira/browse/FLUME-1928
From: Hari Shreedharan hshreedha...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Monday, February 25, 2013 1:30 PM
Subject: Re: File
Rahul Ravindran created FLUME-1928:
--
Summary: File Channel
Key: FLUME-1928
URL: https://issues.apache.org/jira/browse/FLUME-1928
Project: Flume
Issue Type: Question
Affects Versions
[
https://issues.apache.org/jira/browse/FLUME-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rahul Ravindran updated FLUME-1928:
---
Attachment: fl.zip
File Channel
Key: FLUME-1928
From: Michael Segel michael_se...@hotmail.com
To: user@hbase.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Friday, February 15, 2013 9:24 AM
Subject: Re: Using HBase for Deduping
Interesting.
Surround with a Try Catch?
But it sounds like you're on the right path.
Happy
...@hotmail.com
To: user@hbase.apache.org
Cc: Rahul Ravindran rahu...@yahoo.com
Sent: Friday, February 15, 2013 4:36 AM
Subject: Re: Using HBase for Deduping
On Feb 15, 2013, at 3:07 AM, Asaf Mesika asaf.mes...@gmail.com wrote:
Michael, this means read for every write?
Yes
Hi,
We have events which are delivered into our HDFS cluster which may be
duplicated. Each event has a UUID and we were hoping to leverage HBase to
dedupe them. We run a MapReduce job which would perform a lookup for each UUID
on HBase and then emit the event only if the UUID was absent and
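The lookup-then-emit idea can be sketched in plain Python, with a dict standing in for the HBase table and "insert if absent" standing in for an HBase check-and-put (a sketch of the approach only, not the actual MapReduce job):

```python
# Plain-Python sketch of UUID dedupe: a dict stands in for the HBase table.
seen_uuids = {}

def emit_if_new(uuid, event, table=seen_uuids):
    """Return the event only the first time its UUID is observed."""
    if uuid in table:
        return None        # duplicate: drop the event
    table[uuid] = True     # record the UUID (an HBase row keyed by UUID)
    return event           # first sighting: emit downstream

print(emit_if_new("u-1", "event-a"))   # event-a
print(emit_if_new("u-1", "event-a"))   # None (duplicate dropped)
print(emit_if_new("u-2", "event-b"))   # event-b
```

In HBase terms, the check and the write would need to be atomic per row (e.g. a check-and-put) so that two mappers seeing the same UUID concurrently cannot both emit it.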
From: Rahul Ravindran
Sent: 2/14/2013 11:41 AM
To: user@hbase.apache.org
Subject: Using HBase for Deduping
Hi,
We have events which are delivered into our HDFS cluster which may
be duplicated. Each event has a UUID and we were hoping to leverage
HBase to dedupe them. We run a MapReduce job
We can't rely on the assumption that event dupes will not occur outside an hour
boundary. So, your take is that doing a lookup per event within the MR job is
going to be bad?
From: Viral Bajaria viral.baja...@gmail.com
To: Rahul Ravindran rahu...@yahoo.com
Cc
and forth, we can take it off list too
and summarize the conversation for the list.
On Thu, Feb 14, 2013 at 1:07 PM, Rahul Ravindran rahu...@yahoo.com wrote:
We can't rely on the assumption that event dupes will not occur outside an
hour boundary. So, your take is that, doing a lookup per
Re-sending.
From: Rahul Ravindran rahu...@yahoo.com
To: User-flume user@flume.apache.org
Sent: Thursday, January 31, 2013 2:39 PM
Subject: Security between Avro-source and Avro-sink
Hi,
Is there a way to have secure communications between 2 Flume
Hi Brock,
I created a JIRA https://issues.apache.org/jira/browse/FLUME-1900 which has
the log file attached.
~Rahul.
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Saturday, February 2, 2013 4:05 PM
Rahul Ravindran created FLUME-1900:
--
Summary: FileChannel Error
Key: FLUME-1900
URL: https://issues.apache.org/jira/browse/FLUME-1900
Project: Flume
Issue Type: Question
[
https://issues.apache.org/jira/browse/FLUME-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rahul Ravindran updated FLUME-1900:
---
Attachment: flume.log
FileChannel Error
-
Key: FLUME
Hi,
Is there a way to have secure communications between 2 Flume machines(one
which has an avro source which forwards data to an avro sink)?
Thanks,
~Rahul.
Hi,
Is there any additional management/monitoring abilities or anything else for
flume which is available via Cloudera Manager?
Thanks,
~Rahul.
Hi,
Is Flume 1.3 part of CDH4? Is Flume 1.3 part of any debian repo for
installation? I have the link for http://flume.apache.org/download.html which
gives me the tar file. However, this does not install Flume's dependencies.
Thanks,
~Rahul.
[
https://issues.apache.org/jira/browse/FLUME-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13506319#comment-13506319
]
Rahul Ravindran commented on FLUME-1713:
Is there any way to get a patch
Hello,
I just joined the dev user mailing list and could not respond to the v1.3
voting thread.
We are looking to deploy Flume into our production environment prior to a
1.4 release and are hoping to use the Netcat source. It would be great if you
could get
[
https://issues.apache.org/jira/browse/FLUME-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13506592#comment-13506592
]
Rahul Ravindran commented on FLUME-1713:
[~mpercy], I just joined the dev list
Thanks much!
From: Brock Noland br...@cloudera.com
To: dev@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Thursday, November 29, 2012 9:11 AM
Subject: Re: Request to add Flume-1713 into Flume v1.3 RC
I have committed this to flume-1.3.0 branch
flume-ng-doc/sphinx/FlumeUserGuide.rst b4a8868
flume-ng-node/src/test/java/org/apache/flume/source/TestNetcatSource.java
3c17d3d
Diff: https://reviews.apache.org/r/8220/diff/
Testing
---
Unit test added
Thanks,
Rahul Ravindran
[
https://issues.apache.org/jira/browse/FLUME-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rahul Ravindran updated FLUME-1713:
---
Attachment: final_patch.diff
Final patch after incorporating Mike's comments
generated e-mail. To reply, visit:
https://reviews.apache.org/r/8220/#review13797
---
On Nov. 26, 2012, 2:24 a.m., Rahul Ravindran wrote:
---
This is an automatically generated e-mail
://reviews.apache.org/r/8220/diff/
Testing
---
Unit test added
Thanks,
Rahul Ravindran
3c17d3d
Diff: https://reviews.apache.org/r/8220/diff/
Testing
---
Unit test added
Thanks,
Rahul Ravindran
[
https://issues.apache.org/jira/browse/FLUME-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13503598#comment-13503598
]
Rahul Ravindran commented on FLUME-1713:
Review board url: https
Hi,
This is primarily to try and address a flume upgrade scenario in the case of
any incompatible changes in future. I tried this with multiple processes of the
same version, and it appeared to work. Are there any concerns on running
multiple versions of flume on the same box (each with
does come up.
Thanks for all the info.
~Rahul.
From: Mike Percy mpe...@apache.org
To: user@flume.apache.org
Cc: Rahul Ravindran rahu...@yahoo.com
Sent: Wednesday, November 21, 2012 2:24 PM
Subject: Re: Running multiple flume versions on the same box
[
https://issues.apache.org/jira/browse/FLUME-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13502423#comment-13502423
]
Rahul Ravindran commented on FLUME-1713:
Review board id:https
On Mon, Nov 19, 2012 at 2:18 PM, Rahul Ravindran rahu...@yahoo.com wrote:
Are there other such libraries which will need to be downloaded? Is there a
well-defined location for the hadoop jar and any other jars that flume may
depend on?
is that hadoop-hdfs brings in a ton of other stuff which will not be
used in any box except the one running the hdfs sink.
Thanks,
~Rahul.
From: Hari Shreedharan hshreedha...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Monday
On Nov 19, 2012, at 4:27 PM, Rahul Ravindran rahu...@yahoo.com wrote:
That is unfortunate. Is it sufficient if I package just hadoop-common.jar or
is the recommended way essentially doing an apt-get install flume-ng which
will install the below
# apt-cache depends flume-ng
flume-ng
HAProxy has a TCP mode where it round-robins TCP connections. Does it need to
understand something specific about the wire protocol used by Flume?
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent
Resending given I sent it during off-hours.
From: Rahul Ravindran rahu...@yahoo.com
To: user@flume.apache.org user@flume.apache.org
Sent: Tuesday, November 13, 2012 5:52 PM
Subject: Flume hops behind HAProxy
Hi,
Before I try it, I wanted to check
In the 1.3 snapshot documentation, I don't see anything about the spool
directory source. Is that ready?
Sent from my phone. Excuse the terseness.
On Nov 13, 2012, at 9:43 AM, Hari Shreedharan hshreedha...@cloudera.com wrote:
You can find the details of the components and how to wire them
, 2012 10:12 AM
Subject: Re: high level plugin architecture
Where are you seeing that? I see that documented in the 1.3.0 branch
under Spooling Directory Source
On Tue, Nov 13, 2012 at 11:57 AM, Rahul Ravindran rahu...@yahoo.com wrote:
In the 1.3 snapshot documentation, I don't see anything
to build trunk/1.3 branch or wait for 1.3
release).
Thanks
Hari
--
Hari Shreedharan
On Thursday, November 8, 2012 at 3:05 PM, Rahul Ravindran wrote:
Hello,
I wanted to perform a load test to get an idea of how we would look to
scale flume for our deployment. I have pasted
file channel with this source, this will result in double
writes to disk, correct? (one for the legacy log files which will be ingested
by the Spool Directory source, and the other for the WAL)
From: Rahul Ravindran rahu...@yahoo.com
To: user@flume.apache.org
source on failure?
Thanks,
~Rahul.
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Wednesday, November 7, 2012 11:48 AM
Subject: Re: Guarantees of the memory channel for delivering to sink
Hi,
Yes if you
Apologies. I am new to Flume, and I am probably missing something fairly
obvious. I am attempting to test using a timestamp interceptor and a host
interceptor, but I see only a sequence of numbers at the remote end.
Below is the flume config:
agent1.channels.ch1.type = MEMORY
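For reference, a minimal sketch of wiring the two interceptors into such an agent might look like the following (the source name `src1` and the rest of the agent definition are assumptions, since the original config was truncated; `timestamp` and `host` are the built-in Flume NG interceptor aliases):

```properties
# Hypothetical agent wiring; only the interceptor lines are the point here.
agent1.sources = src1
agent1.channels = ch1
agent1.sources.src1.channels = ch1
agent1.channels.ch1.type = MEMORY
# Attach a timestamp interceptor and a host interceptor to the source
agent1.sources.src1.interceptors = ts host
agent1.sources.src1.interceptors.ts.type = timestamp
agent1.sources.src1.interceptors.host.type = host
```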
Hi,
I am very new to Flume and we are hoping to use it for our log aggregation
into HDFS. I have a few questions below:
FileChannel will double our disk IO, which will affect IO performance on
certain performance sensitive machines. Hence, I was hoping to write a custom
Flume source which
?
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Tuesday, November 6, 2012 1:44 PM
Subject: Re: Guarantees of the memory channel for delivering to sink
But in your architecture you are going to write
[
https://issues.apache.org/jira/browse/FLUME-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13491154#comment-13491154
]
Rahul Ravindran commented on FLUME-1227:
Is there a timeline on when this new
Hello,
Is there any way to compile the X library statically?
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg
Hello,
Does anybody know how I can create an iconview in FLTK 2.0?
___
fltk mailing list
fltk@easysw.com
http://lists.easysw.com/mailman/listinfo/fltk
How can I handle the key press event for an Input widget using FLUID 2? I am
using FLTK 2.0. Please help me.
Hi,
How can I access the keypress event in FLTK 2.0 using FLUID 2?
Please help me.
Sir,
Is it possible to have a textbox in Nano-X itself (not using the window API or
FLTK libraries)?
If possible, can you give me some hint or idea?
Thank you.
Sir,
I am currently working on Nano-X and have created a small application which
creates a simple window with buttons and labels.
I am trying to set the font size of the buttons and labels, but the size
does not change.
I want to change the size of the font which is displayed on the button.
I used
Thank you for the reply.
I downloaded FLTK, but when I executed some of the FLTK applications,
they ran separately and not on the nano-X server.
I liked the applications, but I wanted the FLTK application, when executed,
not to be displayed immediately, but to be displayed
Thank you for the reply.
I tried the compiling and linking command:
gcc -O -lm -L/usr/X11R6/lib -lX11 -o mtest2 mtest2.c -lXft -lmwin -lmwimages
-lm /usr/X11R6/lib/libX11.a
but the following error is generated:
/usr/X11R6/lib/libmwin.a(font_freetype.o): In function
--- On Wed, 20/8/08, RAHUL RAVINDRAN [EMAIL PROTECTED] wrote:
From: RAHUL RAVINDRAN [EMAIL PROTECTED]
Subject: [nanogui] window's help
To: nanogui nanogui@linuxhacker.org
Date: Wednesday, 20 August, 2008, 7:19 PM
Sir,
In Nano-X, how can I create a window which contains a text box and a button?
Sir,
I went through the examples in microwindow.90/demos/mwin.
Among them, I looked at mtest2.c and mtest.c, which use the window API.
I want to know how they are compiled and linked.
Can you give me the command with which I can compile and link my file, which
uses the Win32 API? Because I created my