Nice replies, thanks.  To be clear again, no criticism was meant in my
post; this is just feedback about the issues we faced.

philchillbill wrote: 
> 
> In reality, what other credentials might even be relevant? See
> https://mediaserver.smartskills.tech/ac-link-lms.html. It already says "
> Type the username and password you previously chose in the
> configurator". How could that be clearer?
> 

I thought for a moment it might be my ngrok account, since I'd just
created it, or my Amazon account, since the script was trying to
authorize a skill.  It just wasn't clear to me, even after having gone
through all the steps.  It became clearer once I thought about it, but
I'm not sure everyone has sufficient technical background to work it
out.  There are three sets of credentials a user could plausibly
assume at that point in the process: the ngrok account I'd just
created, my Amazon credentials, or some new tunnel credentials.
Others have indicated confusion here too.  For all I knew at that
moment, the process might have been trying to connect to my ngrok
account.

Since the configurator says "ngrok authtoken", perhaps the username
and password prompts should be equally specific.  The tooltips were
helpful, but I didn't see them at the time.  Something like:

"ngrok tunnel authtoken"
"new ngrok tunnel username"
"new ngrok tunnel password"

philchillbill wrote: 
> 
> With LMS-lite, a mere "Alexa, favorite three on Kitchen" will play
> favorite number 3 on your Kitchen player. That's just 5 words. 
> 

Oh, if only that were all that was required, there would be no issue.
But those 5 words are not sufficient.

1. Needed a Routine to combine the Elan and LMS-Lite skills to turn on
the Elan amp and select a zone.  I opted for a simple test: "Alexa,
Gumby".  That turned on Favorite 1 on Squeeze 2.
2. Needed a Routine to switch to another radio stream: "Alexa,
Pokey".  KQED is now playing.
3. Needed a Routine to switch off the Elan amp + Touch, so we have to
remember that verbiage too.
4. Needed several Routines to change the volume up/down/mute - these
call the Elan skill.  I opt for Routines because nobody is going to
remember the idiosyncrasies of the syntax for all these different
skills, for each operation (and hence would need cheat sheets), esp.
when kitchen timers are going off, audio is too loud, and there is
time pressure.  We have Alexa skills and routines for Lutron, Elan,
and several other devices; MediaServer and LMS-Lite (each with their
own differences and restrictions) placed too many restrictions on
natural language and expectations.
5. Needed to remember which favorites are in which positions.  Which
one is in slot 3 again?  I can't recall, so I'll open Material -
ok, #1.  Heh, that's the wrong station!  Ugh, the Material vs. Default
skin ordering issue.
6. Wife needs to remember that when she's in her closet or the master
bath, she uses Alexa's native language to play BBC World News, but in
the kitchen it's, uh, "Alexa, favorite three on Kitchen".  Plus all
the variants for changing stations, volume, stop/start/pause/rewind.
There are dozens of variations of these stop, pause, resume, go-back
kinds of commands across all the devices.
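The cheat-sheet problem above can be sketched out.  This is just an
illustration of the mapping we had to keep in our heads: the trigger
phrases ("Gumby", "Pokey") are real, but the underlying skill
utterances are my own paraphrases, not the skills' exact syntax.

```python
# Hypothetical cheat-sheet of the Routines described above.  The
# trigger phrases are from our setup; the command strings are
# illustrative assumptions, not exact Elan/LMS-Lite skill syntax.
ROUTINES = {
    "Alexa, Gumby": [
        "Elan: turn on the kitchen zone",       # amp power + zone
        "LMS-Lite: favorite one on Squeeze 2",  # start the stream
    ],
    "Alexa, Pokey": [
        "LMS-Lite: favorite two on Squeeze 2",  # switch to KQED
    ],
}

def expand(trigger):
    """Return the skill commands a short trigger phrase stands in for."""
    return ROUTINES.get(trigger, [])
```

The whole point of the Routine is that nobody at the kitchen counter
has to recall the right-hand side of that table.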

Again, this is not a criticism.  It's just commentary on our
experience, where the benchmark for success is the utterly simple
"Alexa, play KQED" and "Alexa, stop".


philchillbill wrote: 
> 
> We cannot reduce it to just "Play BBC World News"
> 

That's exactly why I opted for testing with a Routine: "Alexa,
Gumby".  Easy.  No thinking.  We could test the limits of the possible
efficiency and test against that aforementioned benchmark.

And this is the way we control most of our devices for routine
operations.  "Alexa, TV time."  "Alexa, time for coffee."  "Alexa,
shower time."  We've focused on tasks and scenes, not on individual
commands, as we've found those to be too idiosyncratic and
error-prone.

philchillbill wrote: 
> 
> The way Alexa works is that she hears what you say, decides on a skill
> that can handle it, and passes the parameters to the skill. If the
> commands were stripped from MediaServer but nevertheless spoken by the
> user, Alexa would merely redirect to a different skill (or music
> service) to process. So Amazon Music or TuneIn (as fallbacks) would be
> reacting instead of MediaServer, but it would still be a mess. Fact is,
> voice assistants (as you point out) need the user to remember pretty
> exact syntax or it's game-over.
> 

Yeah, I get it.  Two things here that I was assuming might be possible:

One: allow disabling a command, still accept it, and just bonk when it
is issued - but never show it in the Help list.  The idea is to reduce
the noise that comes back when I ask for help.  For us this would
eliminate all the commands for alarms, synchronizing, grouping, volume
controls, bookmarks, transfers, sleep, power on/off, shuffle, repeat,
listing my players (we know them), renaming, and enabling/disabling
artwork.  That's a huge number, reduced to the simple audio choices
we'd want.
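Roughly what I have in mind, as a sketch only - MediaServer has no
such option today, and every name below is invented for illustration:

```python
# Hypothetical per-user disable list: hidden from Help, but still
# accepted (with a "bonk") if spoken.  No such MediaServer feature
# exists; these command names are invented for illustration.
DISABLED = {
    "alarms", "synchronize", "group", "volume", "bookmarks",
    "transfer", "sleep", "power", "shuffle", "repeat",
    "list players", "rename", "artwork",
}

def help_text(all_commands):
    """Help would only list commands the user hasn't opted out of."""
    return [c for c in all_commands if c not in DISABLED]

def handle(command):
    """Disabled commands are still accepted, but just bonk."""
    return "bonk" if command in DISABLED else "dispatch"
```

The key point is that the disable list trims Help, not the grammar, so
a spoken disabled command fails gracefully instead of falling through
to some other skill.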

Two: My hope was that users like us - who would never play back to an
Echo or other LMS-capable audio device, and would only ever play back
to one of our two Touch devices - might be able to configure
MediaServer to intercept those parameters and re-target the command at
the appropriate Touch device.  So if I said "Stream" instead of
"Play", and I've configured MediaServer that I never want to target
the Echo, MediaServer targets our (assumed) Touch device instead.  We
won't ever say "stream", but again, just redirecting the commands at
what we have and want would greatly reduce the complexity and the
memory matrix.
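Again, purely a sketch of the idea - nothing like this exists in
MediaServer, and the device names are just ours:

```python
# Hypothetical re-targeting map: commands aimed at a player we will
# never use get redirected to a real Touch device.  This is a
# config sketch of a wished-for feature, not an existing one.
RETARGET = {"Echo": "Kitchen Touch"}  # we never play to the Echo

def resolve_player(requested):
    """Map an unwanted playback target onto the device we meant."""
    return RETARGET.get(requested, requested)
```

Anything not in the map passes through untouched, so existing behavior
is unchanged for users who don't configure it.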

philchillbill wrote: 
> 
> That can only boil down to her not even launching MediaServer but
> answering via a different skill/service. If she 'forgot' the assumed
> player then MediaServer would fall back to first-run mode. You would
> have been told that an automatic player discovery was going to be
> performed. And your player names would have been read out. If that
> didn't happen, you didn't invoke MediaServer.
> 

We did go through the discovery.  I think it was part of the learning
process.  For some commands we could omit the player; for others, it
was required.  Once this got screwed up, it took mental effort to
recover (e.g., LMS-Lite requires the player name for some commands,
but not others).  It all required thinking about what language to use
under any given set of circumstances, and having enough road mileage
to know when Alexa would assume control vs. pass things on to a skill.

philchillbill wrote: 
> 
> Amazon requires invocation for custom skills. An invocation name must be
> two words (Media Server in this case). Given those Amazon-enforced
> restrictions, what might work better? Also, what's the alternative? A
> teacher in front of a class asks a question. All the kids raise their
> hands. If she wants little Johnny to answer, she has to specify that.
> This is no different...
> 

I didn't know about the two-word requirement, but I do get the
invocation requirement.  Yet I think you'd agree that certain two-word
skill names, such as Paradoxically Loquacious or Elemental Elements,
would not be good choices. :-)

philchillbill wrote: 
> 
> My own biggest personal gripe with Alexa is lack of tolerance for any
> humming-and-hawing. One slight slip in a long command and it all goes
> south. Hopefully that will improve over time. The whole concept of AI is
> very over-hyped and the actual intelligence under the hood often leaves
> a lot to be desired, so we still have a ways to go :rolleyes:

Agreed.  We had a lot of eye-rolling, and "Ok, we're done with this"
moments.


------------------------------------------------------------------------
MrC's Profile: http://forums.slimdevices.com/member.php?userid=468
View this thread: http://forums.slimdevices.com/showthread.php?t=111016

_______________________________________________
plugins mailing list
plugins@lists.slimdevices.com
http://lists.slimdevices.com/mailman/listinfo/plugins
