1
00:00:03.270 --> 00:00:23.630
Ryan Dymek: Yeah, I went ahead and started the recording. Okay? And if you haven't used any, that's fine. I just wanted to see what people are using. ChatGPT, it looks like. Let me go ahead and end this; if you haven't answered, it's not a big deal, but I'll share these results here just to see which favorites have emerged. ChatGPT has gotten the big
2
00:00:23.690 --> 00:00:33.339
Ryan Dymek: notoriety, if you will. But a lot of these tools are built on a common foundation that has been
3
00:00:33.460 --> 00:00:44.480
Ryan Dymek: developed by OpenAI. Many of them have been built around the GPT-3 or GPT-4 framework, and these products have emerged and become very productized.
4
00:00:44.650 --> 00:00:54.679
Ryan Dymek: So now you can use these chatbots and work interactively with these tools, and I'll show you some things today in the various tools we're looking at.
5
00:00:54.930 --> 00:01:03.960
Ryan Dymek: But yeah, these are a few of them, and they're all emerging; these things are popping up all over the place. It's funny, it looks like nobody selected Grok,
6
00:01:04.410 --> 00:01:22.150
Ryan Dymek: the xAI model that's actually in beta right now. You have to have an X Premium+ membership to use it, so I was wondering if anybody would select that one or not. But that one looks interesting, too; it's backed by Elon Musk and X, formerly Twitter.
7
00:01:22.170 --> 00:01:33.849
Ryan Dymek: And then you've got Google, which is well known for being an AI company, right? But ChatGPT has been the big one in recent times.
8
00:01:33.940 --> 00:01:40.040
Ryan Dymek: ChatGPT actually grew its user base faster than any other technology we know of to date.
9
00:01:41.300 --> 00:01:44.259
Ryan Dymek: Within, I believe it was 2 months
10
00:01:44.400 --> 00:01:50.469
Ryan Dymek: they had hit 100 million users
11
00:01:50.620 --> 00:01:52.600
Ryan Dymek: within 2 months of release.
12
00:01:53.033 --> 00:02:16.239
Ryan Dymek: So that's pretty crazy. But if you haven't used it, I would encourage you to go out and explore some of these tools. A lot of them you can use for free, just to see what's going on out there. It's really great at enhancing your job, no matter what you do: crafting presentations, for example, or you could use it to
13
00:02:16.240 --> 00:02:23.050
Ryan Dymek: write formal documents, do marketing work, or create imagery for things.
14
00:02:23.180 --> 00:02:32.239
Ryan Dymek: You can actually use it to write code. I don't know if anybody here is a programmer, but you can use it to help you produce the foundation of your code for something.
15
00:02:32.430 --> 00:02:35.070
Ryan Dymek: I'll actually show you some of that stuff here in this session.
16
00:02:36.080 --> 00:02:40.580
Ryan Dymek: So in that regard, let me go ahead and stop that.
17
00:02:42.940 --> 00:02:47.940
Ryan Dymek: I've got a couple of other questions for you, so we'll play around with this.
18
00:02:49.880 --> 00:03:00.009
Ryan Dymek: We've got another one. This one is focused more on the people here who are, or may be, creating models of their own. Okay.
19
00:03:00.286 --> 00:03:20.020
Ryan Dymek: Yeah, and I'll definitely do that, Jay. Jay asked that we basically start with definitions of this stuff, and we'll definitely do that. These are just high-level questions to get the ball rolling, but the very first topic on our agenda is understanding: what is this stuff, right? How does it generally work?
20
00:03:20.365 --> 00:03:42.500
Ryan Dymek: What are the differentiators, and what are the different terms we'll use? I'm sure we've got people here from all experience levels. So this next question is whether you are somebody who has created your own AI model before, maybe somebody actually in the ML landscape. This is just a yes or no.
21
00:03:42.650 --> 00:03:45.920
Ryan Dymek: So maybe you're a data scientist that's doing some of this stuff.
22
00:03:46.378 --> 00:03:48.469
Ryan Dymek: You've trained on your own data.
23
00:03:48.820 --> 00:03:55.069
Ryan Dymek: By the way, I'm going to show you that firsthand; we're going to do that today, too. I've got some demos lined up for us to have some good fun with this.
24
00:03:55.430 --> 00:03:59.749
Ryan Dymek: But okay, it looks like so far about
25
00:04:00.410 --> 00:04:02.280
Ryan Dymek: bordering on 20%
26
00:04:05.818 --> 00:04:14.050
Ryan Dymek: Oh, the URLs for the courses. Yeah, if you could post the URLs for those couple of courses that were mentioned, that'd be great.
27
00:04:14.280 --> 00:04:15.270
Axcel ILT: Absolutely.
28
00:04:15.540 --> 00:04:25.379
Ryan Dymek: Wonderful. Thank you. So it looks like about 19-20%, somewhere around there, have created models before. So
29
00:04:25.380 --> 00:04:54.890
Ryan Dymek: some of the conversations we'll have today might be a little more directed at you, and some of them are just for everybody, right? I won't alienate all those people who came here who haven't created models before, who maybe don't even quite know what a model actually is, so we'll explain all of that. We do only have two hours here, not multiple days. In some of those classes we offer, we can start more at ground level and work our way up, even to fully advanced levels, if that's what you wish to do, right?
30
00:04:55.510 --> 00:05:00.109
Ryan Dymek: So I'll share those results, just so everybody can see the breakdown.
31
00:05:00.960 --> 00:05:03.149
Ryan Dymek: Alright. And then we've got,
32
00:05:03.780 --> 00:05:06.230
Ryan Dymek: Let's see here, if you've
33
00:05:08.020 --> 00:05:09.700
Ryan Dymek: let's see, you're actually
34
00:05:11.140 --> 00:05:14.349
Ryan Dymek: Okay, let me ask you this question. This is actually a great one.
35
00:05:15.270 --> 00:05:20.369
Ryan Dymek: So one more poll here. Does your organization currently have a strategic plan for integrating
36
00:05:20.380 --> 00:05:23.070
Ryan Dymek: or enhancing AI/ML technologies?
37
00:05:26.290 --> 00:05:28.099
Ryan Dymek: We'll see the initial answers,
38
00:05:28.130 --> 00:05:30.139
Ryan Dymek: and it's very heavily weighted toward yes.
39
00:05:30.540 --> 00:05:31.770
Ryan Dymek: Not surprising.
40
00:05:31.810 --> 00:05:35.309
Ryan Dymek: This has become extremely popular just in the last year. Right?
41
00:05:37.140 --> 00:05:40.558
Ryan Dymek: This is really emerging. I will say
42
00:05:41.350 --> 00:05:42.240
Ryan Dymek: that
43
00:05:43.450 --> 00:05:49.019
Ryan Dymek: AI is not new. It has more recently become
44
00:05:49.150 --> 00:05:54.210
Ryan Dymek: more apparent, more in our face, but it is not new. Okay,
45
00:05:54.280 --> 00:06:03.319
Ryan Dymek: but we're going to talk about this. A lot of companies have just become more aware of the fact that they need to be using it. If you're not using AI in your industry and your business,
46
00:06:03.660 --> 00:06:25.419
Ryan Dymek: your competitors are. Okay? And as far as being out of a job, that's a tricky one, right? I'm not going to get too far into that one. But honestly, the whole world has this potential challenge, because now we're getting to the point where, quite frankly, AI can do most of what a human can do.
47
00:06:25.727 --> 00:06:44.959
Ryan Dymek: So there is a whole ethical discussion around that; just because we can doesn't mean we should. But there's a lot it enables, too. It allows us to be more creative, to focus on the bigger picture, and then direct the AI to do what we need it to do to accomplish what we need to accomplish, right?
48
00:06:45.610 --> 00:06:46.470
Ryan Dymek: Yeah.
49
00:06:47.240 --> 00:06:50.429
Ryan Dymek: Exactly. Randeep actually
50
00:06:50.530 --> 00:07:07.149
Ryan Dymek: touched on this perfectly. This is where I was headed: AI won't necessarily take your job. It's those who don't know how to leverage AI that will potentially have a challenge here, right? But I don't want to get too far into that; that's a whole other discussion beyond the scope of today's course.
51
00:07:08.570 --> 00:07:21.659
Ryan Dymek: Is it necessary for everyone to learn AI and ML? Well, there are different degrees of learning this stuff, right? And I don't want to get too far into the Q&A just yet; we'll have some conversations and get back to some of this.
52
00:07:21.830 --> 00:07:38.240
Ryan Dymek: But what are we talking about learning? Well, one side of it is being a user of AI, and I would argue a lot of people need to learn how to do that: how to prompt AI, ask it what to do, what to produce, right? And there's a whole skill in
53
00:07:38.410 --> 00:07:46.430
Ryan Dymek: what we call prompts, in prompt creation. There's a whole skill in that. And then there's the other side of it: the people actually creating the AI,
54
00:07:46.540 --> 00:07:56.499
Ryan Dymek: right? And so in that regard, no, not everybody needs to know how to create AI; that's actually a very niche skill.
55
00:07:57.055 --> 00:08:20.044
Ryan Dymek: But more and more people need to learn how to use AI as a user: know how to prompt it and leverage it. Because if you don't, it's not that you can't do your job; it's just that the person next to you is going to do it ten times faster and ten times better, right? Because they're using the tools at their disposal. Okay? That's how I see this stuff: AI has become a tool for us.
56
00:08:20.480 --> 00:08:32.090
Ryan Dymek: Most people remember going through school, right? Let's say when you're a child, and you're just learning how to do math: how to add, subtract, multiply, and divide, and do all these different things.
57
00:08:32.330 --> 00:08:34.749
Ryan Dymek: And then one day they introduce a calculator to you.
58
00:08:35.039 --> 00:08:41.069
Ryan Dymek: And once you're allowed to use the calculator, your brain just goes: oh my gosh, I wish I could have been using this the whole time.
59
00:08:41.429 --> 00:08:50.159
Ryan Dymek: Well, it would be silly to go through life insisting that you never use that calculator, insisting that you're just going to do it by hand.
60
00:08:50.180 --> 00:09:03.599
Ryan Dymek: There's nothing stopping you from doing that math by hand, but you'll fall way behind. If you're somebody in the math business, it would be silly to do that, because your competition will just blow you out of the water at that point.
61
00:09:03.874 --> 00:09:22.789
Ryan Dymek: You really need to focus on using the tools at your disposal, and that's what AI has become now. It's even writing programming code for people. That doesn't mean it's writing entire applications, right? We still have to direct it to write these code snippets and chunks, but it speeds up development efforts like crazy. And that's just one example.
62
00:09:23.475 --> 00:09:38.249
Ryan Dymek: If you're a content writer or a journalist, you can use these things to help improve your writing style, or to get a baseline in place that you can then go through and modify for your own use, right?
63
00:09:38.880 --> 00:09:39.870
Ryan Dymek: Okay.
64
00:09:40.090 --> 00:09:46.349
Ryan Dymek: So let me just show the results on this, right? It looks like for about 71% of attendees,
65
00:09:46.750 --> 00:09:53.060
Ryan Dymek: their organization currently has a strategic plan here, and another 23% just don't know.
66
00:09:53.090 --> 00:09:55.870
Ryan Dymek: So chances are, I would
67
00:09:56.280 --> 00:10:11.489
Ryan Dymek: be willing to guess that for a large percentage of the people who don't know, it's probably a yes. More and more companies are realizing they have to at least dabble in this, because their competitors are,
68
00:10:11.650 --> 00:10:17.580
Ryan Dymek: right? So, anyhow, that's fantastic. All right, I'm going to go ahead and stop sharing the polls
69
00:10:18.070 --> 00:10:19.400
Ryan Dymek: and
70
00:10:21.590 --> 00:10:25.190
Ryan Dymek: Risk mitigation strategies in place for AI/ML?
71
00:10:25.210 --> 00:10:48.009
Ryan Dymek: Yeah, that's a great question. Yes, there's a ton of security around all of this. In fact, we often use AI and ML for security. But as far as risk mitigation and risk protection, there are a lot of discussions around that. The question is, do you just let it go wild? Most AI platforms today are not built that way. They're not built to just
72
00:10:48.010 --> 00:11:04.689
Ryan Dymek: be open-ended. We leverage it just like any other tool. At the end of the day, we are still directing exactly how we want to use it and leverage it within our environments; it's not just open-ended. So there's a whole lot around mitigation of risk, for sure.
73
00:11:04.690 --> 00:11:10.680
Ryan Dymek: With that said, you could also argue the flip side: there's also a lot of risk in not using it,
74
00:11:10.760 --> 00:11:13.930
Ryan Dymek: right? And so there's discussions on all of that. Okay.
75
00:11:14.270 --> 00:11:15.710
Ryan Dymek: So let's just talk.
76
00:11:15.720 --> 00:11:45.180
Ryan Dymek: Let's get into this. Let's talk about the big picture first. This is an old quote from Arthur C. Clarke, the author of 2001: A Space Odyssey; I don't know if anybody remembers it: "Any sufficiently advanced technology is indistinguishable from magic." And this is kind of the way it's starting to appear, right? People who don't work in this landscape are seeing AI and going, oh my gosh, what can't it do? And you're seeing it pop up everywhere. If you've got a smartphone, it's popping up on your smartphone; if you're doing Google searches, it's popping up in your Google searches, and so on, right?
77
00:11:47.390 --> 00:12:13.280
Ryan Dymek: Alright, so let's talk about understanding traditional AI and ML. When I say traditional, I want you to understand, first off, that AI is not new. In fact, a lot of these theories and concepts started all the way back in the mid-to-late 1940s, and John McCarthy coined the term artificial intelligence back in 1955.
78
00:12:13.380 --> 00:12:35.400
Ryan Dymek: A lot of the theory developed then was way ahead of its time, meaning there was all this mathematical theory, but they didn't have the computer systems to actually realize it. Some of the stuff we're doing today is still a derivative of that era, which is kind of crazy. But also,
79
00:12:35.400 --> 00:12:56.159
Ryan Dymek: there's a lot today that doesn't look even close to what was envisioned back in 1955. I just want to point out, again, this is not new stuff; we're just evolving. And what happens with AI is that it evolves faster and faster, right? Because we can use AI to improve AI.
80
00:12:56.510 --> 00:13:00.119
Ryan Dymek: So this is an accelerating thing, and that's why, in the last
81
00:13:00.180 --> 00:13:03.770
Ryan Dymek: year or two, this has gotten so big, so fast.
82
00:13:04.000 --> 00:13:31.350
Ryan Dymek: So there are these different types of artificial intelligence. We have what we call classical rule-based programming. Basically, you write fixed rules: a programmer writes rules that say, in this condition or this situation, do this, right? It's very reactive to the thing it sees, based on those rules that have been defined. A simple example of this
83
00:13:31.580 --> 00:14:00.979
Ryan Dymek: is the old test that used to be done years ago around playing chess. This was a thing; I don't remember the exact timeframe, but a computer would play a human being at chess. You take some of the top chess masters in the world, and it actually took a while for the computer to beat those experts. And it was all based on rules. Basically, that meant the system knew the rules of the game, and you'd hard-code,
84
00:14:00.980 --> 00:14:08.049
Ryan Dymek: hard-program these situations: if the board looked like this, here's your probable next best move,
85
00:14:08.090 --> 00:14:08.900
Ryan Dymek: right?
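To make that concrete, here is a minimal sketch of the classical rule-based style; the positions and moves in the lookup table are made-up placeholders, not real chess logic:

```python
# Classical rule-based "AI": a fixed lookup of hand-written rules.
# The positions and responses here are hypothetical placeholders.
RULES = {
    "e4 e5": "Nf3",   # if the board reached this state, play this move
    "e4 c5": "Nc3",
    "d4 d5": "c4",
}

def next_move(position: str) -> str:
    # React only to situations a programmer anticipated; otherwise give up.
    return RULES.get(position, "resign")  # no rule means no idea what to do

print(next_move("e4 e5"))  # Nf3
print(next_move("a3 h6"))  # resign: this situation was never hard-coded
```

The weakness the speaker describes is visible right away: every new situation needs another hand-written entry.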
86
00:14:09.400 --> 00:14:10.640
Ryan Dymek: And oh.
87
00:14:12.050 --> 00:14:17.069
Ryan Dymek: Traditional, thank you for that. Yeah, I missed that one. Thank you so much, Paul.
88
00:14:17.630 --> 00:14:26.359
Ryan Dymek: I should have used AI to check that; I'm not a good speller. So, in any event,
89
00:14:26.630 --> 00:14:29.213
Ryan Dymek: the AI world
90
00:14:30.170 --> 00:14:35.570
Ryan Dymek: started with just these hard-coded rules. Okay? And so
91
00:14:35.680 --> 00:14:56.330
Ryan Dymek: the rules are just hard-programmed, and the problem is, that wasn't very dynamic, right? It had to be programmed rule after rule after rule: you see this situation, make this move; you see that situation, make that move. It was a very static thing, and it wasn't until enough rules were developed that it could get good enough,
92
00:14:56.330 --> 00:15:18.010
Ryan Dymek: or smart enough, to actually beat the players. Okay? This becomes a problem in itself, too, because the rules become endless; the system gets bigger and more bloated. Now you have not just ten rules; you have a hundred rules, a thousand rules, tens of thousands of rules, and it just weighs the system down. It gets very slow and inefficient.
93
00:15:18.390 --> 00:15:35.359
Ryan Dymek: So in that regard, the rules really had some limitations, right? Then enters the machine learning landscape, which is quite interesting in that
94
00:15:35.610 --> 00:15:42.210
Ryan Dymek: machine learning actually allows the system to derive the rules from the data.
95
00:15:42.220 --> 00:15:53.869
Ryan Dymek: Okay? This is where you choose things in the data that you believe are important; a human being actually chooses the important elements.
96
00:15:53.870 --> 00:16:15.240
Ryan Dymek: Most basic example: let's say we're trying to build some sort of system that will detect whether something is a car, an airplane, or a motorcycle. Okay, those three things. Those problems are usually solved by us selecting features, so in that case I might say, well, a car has four wheels, it's got a windshield, it's got some headlights,
97
00:16:15.600 --> 00:16:27.350
Ryan Dymek: a pair of headlights, and so on. Then you've got an airplane: it has wings, right? It's got a tail, and, of course, it may have two wheels or three wheels,
98
00:16:27.680 --> 00:16:32.470
Ryan Dymek: depending, and then potentially a propeller, or a jet engine.
99
00:16:32.590 --> 00:16:54.090
Ryan Dymek: And then, of course, a motorcycle: two wheels, an engine, a headlight, and some other things along those lines, perhaps. My point is, you call those features out. We, as human beings, in machine learning would actually call those features out and say, here are the things that I believe are important for
100
00:16:54.110 --> 00:16:56.340
Ryan Dymek: identifying what's what.
101
00:16:56.470 --> 00:17:20.189
Ryan Dymek: Okay? So when we do this concept of training the system, the training is kind of like: I'm going to show you a flashcard of a car, then another picture of a car, then another, and eventually you start to notice the differences and the similarities. The idea with machine learning is that we call out these features, and then what can happen is
102
00:17:20.810 --> 00:17:24.309
Ryan Dymek: something new can be introduced that it's never seen before.
103
00:17:24.470 --> 00:17:29.129
Ryan Dymek: and it's got enough matching features that the system can say: you know what, I believe that's a car.
104
00:17:29.670 --> 00:17:32.149
Ryan Dymek: That's the general idea of machine learning.
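Here is a minimal sketch of that human-selected-feature idea using scikit-learn, assuming it is installed; the feature values (wheel count, wings, windshield) are invented for illustration:

```python
# Classic ML: humans pick the features, the algorithm derives the rules.
from sklearn.tree import DecisionTreeClassifier

# Hand-chosen features per example: [wheels, has_wings, has_windshield]
X = [
    [4, 0, 1],  # car
    [4, 0, 1],  # car
    [3, 1, 1],  # airplane
    [2, 1, 1],  # airplane
    [2, 0, 0],  # motorcycle
    [2, 0, 0],  # motorcycle
]
y = ["car", "car", "airplane", "airplane", "motorcycle", "motorcycle"]

model = DecisionTreeClassifier().fit(X, y)  # rules are derived from the data

# Something new it has never seen before, but with familiar features:
print(model.predict([[4, 0, 1]]))  # -> ['car']
```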
105
00:17:32.190 --> 00:17:50.329
Ryan Dymek: So it's an algorithmic approach along with probabilities; that's what all of this stuff comes down to, probabilities. But the whole idea is that human beings had to provide those features. The problem with
106
00:17:50.330 --> 00:18:07.309
Ryan Dymek: providing features is that, in and of itself, sometimes we don't know what to provide. We don't necessarily know what makes something unique, and sometimes that's a challenge. So there's another type of machine learning where you can do grouping, or clustering, as it's called. So
107
00:18:07.460 --> 00:18:09.540
Ryan Dymek: basically, this idea of
108
00:18:10.038 --> 00:18:12.549
Ryan Dymek: let's say I showed you
109
00:18:12.560 --> 00:18:24.979
Ryan Dymek: stacks of pictures, again of cars, airplanes, and motorcycles, those being the three possible options. We've got a couple hundred pictures, and I just say to you: group them.
110
00:18:25.220 --> 00:18:47.429
Ryan Dymek: You may not know the name of something; you may not know to call it a car, an airplane, or a motorcycle, but you might end up with three separate piles of things that look similar, and naturally it might be a plane pile, a car pile, and a motorcycle pile. And that's without me giving you any input on it. So machine learning had that ability as well.
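And a minimal sketch of that "group them" exercise, using k-means clustering on the same invented feature vectors, with no labels supplied at all:

```python
# Unsupervised ML: no labels given, the algorithm just forms the piles.
from sklearn.cluster import KMeans

# The same invented features as before, but with no names attached.
X = [
    [4, 0, 1], [4, 0, 1],   # (secretly cars)
    [3, 1, 1], [2, 1, 1],   # (secretly airplanes)
    [2, 0, 0], [2, 0, 0],   # (secretly motorcycles)
]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # three unnamed piles, e.g. [0 0 1 1 2 2]
```

The piles come out of the data alone; attaching the names "car", "airplane", and "motorcycle" would still be up to a human.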
111
00:18:47.560 --> 00:19:08.680
Ryan Dymek: But really, it was about categorizing things and identifying where to place a thing, which category it belongs in; that was most of machine learning. Then came on the scene this concept of deep learning. This is where things got really interesting, and this happened around 2012.
112
00:19:08.950 --> 00:19:11.640
Ryan Dymek: Well, around 2012,
113
00:19:12.032 --> 00:19:41.409
Ryan Dymek: the deep learning stuff really took off, 2010 to 2012. Deep learning is not from that era; in fact, it was developed long before, but our hardware couldn't keep up. There was what we call an AI winter, where our hardware wasn't good enough to do what deep learning required; it needed really high-end hardware. But around 2010 to 2012, the hardware became available, and we got a whole new spurt in AI,
114
00:19:41.840 --> 00:20:08.559
Ryan Dymek: specifically deep learning. What deep learning does, and what makes it different from machine learning, is that it allows the technology to derive its own features from the data, as well as its own rules. In that regard, if I just showed it a picture of a car, it would pull out the features, the edges and the corners and all these little things, and it may derive thousands of features about a car, making it far more accurate.
115
00:20:08.560 --> 00:20:23.230
Ryan Dymek: It would identify those things that make a car a car. Okay? I don't know if anybody here has ever used facial recognition, let's say on their phone. Say you have your phone or your desktop and you're using it to log in.
116
00:20:23.250 --> 00:20:35.869
Ryan Dymek: If any of you use those types of features on your phone or your computer, when you go to log in with that, what it does when you train it, when you teach it what your face is,
117
00:20:36.200 --> 00:20:43.390
Ryan Dymek: is identify features of your face. But it's not like, hey, you have two eyes and a nose and a mouth.
118
00:20:43.460 --> 00:20:52.300
Ryan Dymek: It's actually looking at every little freckle, every little bump, everything, and there might be thousands of features that make your face yours.
119
00:20:52.350 --> 00:21:07.310
Ryan Dymek: And this is what's interesting: back when we were dealing with COVID at full scale, and everybody was isolated, I had a case where I was logging into my computer and I had a mask on.
120
00:21:07.680 --> 00:21:37.609
Ryan Dymek: I had a hat on at the time; I was just doing work on my own behind the scenes. So I had a hat, a mask, and my glasses on, and it was still able to recognize that it was me. Really, it's because of specific wrinkles in my face and little spots; enough of those were still identifiable that it could say with confidence, yeah, that's that person. Okay? So instead of a human being calling out, here are the features you have to identify, the system identifies those features itself
121
00:21:37.700 --> 00:21:40.470
Ryan Dymek: and actually learns to do that.
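As a rough sketch of that difference, here is a tiny convolutional network in PyTorch (assuming torch is installed); the layer sizes are arbitrary, but the point is that the convolutional layers learn their own edge- and corner-like features from raw pixels instead of being handed a feature list:

```python
# Deep learning: features are learned from raw pixels, not hand-supplied.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learns low-level features (edges)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learns higher-level features (parts)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 3),                             # car / airplane / motorcycle scores
)

image = torch.randn(1, 3, 64, 64)  # a stand-in for one RGB photo
print(model(image).shape)          # torch.Size([1, 3])
```

Nobody told this network what a wheel or a windshield is; with enough labeled photos and a training loop, the filters discover useful features on their own.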
122
00:21:41.160 --> 00:21:41.910
Ryan Dymek: Okay?
123
00:21:42.940 --> 00:21:44.170
Ryan Dymek: Oh, my gosh.
124
00:21:45.760 --> 00:21:58.310
Ryan Dymek: Dinesh, that's crazy. Yeah, that's not great, right? Twins. That's a tricky one, because facial identification, until it gets down to the retina or something, I'm sure there are probably still unique things you could use. But
125
00:21:58.400 --> 00:22:04.229
Ryan Dymek: I would guess, though, if somebody has a twin, maybe they should opt not to use facial recognition
126
00:22:04.450 --> 00:22:31.409
Ryan Dymek: for authentication. It's up to them, but very interesting; it's kind of crazy. So yeah, that's the whole thing: it identifies that. Now, I would also argue that's not saying anything about AI itself; that's about that implementation of AI. There are ways to adjust the threshold of how precise it needs to be before it accepts who you are. Every single implementation is different on that, whether you're using a
127
00:22:31.410 --> 00:22:39.340
Ryan Dymek: Microsoft laptop and logging in with your face, an iPhone, or
128
00:22:39.440 --> 00:22:56.409
Ryan Dymek: an Android phone, whatever it might be. They're all going to implement it a little differently and have different thresholds and different standards. Obviously, if you're working at some top-secret facility and they're doing facial recognition, they're probably going to have a higher bar to pass, right?
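One way to picture that threshold knob is as a similarity cutoff over learned face features; the vectors and the 0.8 cutoff below are invented purely for illustration:

```python
# Face matching as a similarity threshold over learned feature vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

enrolled = [0.12, 0.87, 0.44, 0.91]   # hypothetical embedding stored at setup
attempt  = [0.10, 0.85, 0.47, 0.90]   # embedding from today's login photo

THRESHOLD = 0.8  # a stricter site (say, a top-secret facility) raises this
if cosine_similarity(enrolled, attempt) >= THRESHOLD:
    print("accept")
else:
    print("reject")
```

Raising the threshold trades convenience (a mask or hat may now fail) for security (a twin is less likely to pass).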
129
00:22:56.911 --> 00:23:13.610
Ryan Dymek: So, no, deep learning is more than that; I was just using facial recognition and image recognition as simple examples. The point is, it learns what is important in order to identify things and to do work,
130
00:23:13.980 --> 00:23:18.899
Ryan Dymek: and it's actually deriving that itself. It's not relying on human beings to say so.
131
00:23:18.990 --> 00:23:40.199
Ryan Dymek: Right? One of the challenges we've always had with traditional machine learning, too, is the biases that get introduced into the system. The programmer, or the data scientist, or somebody is saying, here's what we believe to be important in making these decisions.
132
00:23:40.200 --> 00:23:49.320
Ryan Dymek: The problem with that is, now you have human biases in the whole thing that can skew it; you can create a bent, and that's not good.
133
00:23:49.614 --> 00:24:11.969
Ryan Dymek: Deep learning may still be somewhat subject to it, based on the algorithms and such, but for the most part it reduces that bias, because at that point deep learning is deriving the features itself, right? It's figuring that stuff out for itself. I'll give you some good examples comparing these in a more traditional sense, the chess example, in just a minute.
134
00:24:12.060 --> 00:24:37.559
Ryan Dymek: Generative AI, however, is a form of deep learning. All of these things cascade; AI in general encompasses all of them. The first stage would be classical rule-based programming, then machine learning, then deep learning as the deeper stage: more precise, more specific, and maybe requiring more hardware and more resources.
135
00:24:37.560 --> 00:24:57.199
Ryan Dymek: And then generative AI is actually a form of deep learning; that's how all of these things sit within each other. Okay? Generative AI is a special type of deep learning that came about based on a lot of principles discussed in the 2017 paper "Attention Is All You Need." You can go look that up.
136
00:24:57.310 --> 00:25:10.249
Ryan Dymek: There's some really interesting information in it, and there are blogs written about it, too, if you don't want to read the paper itself. I basically laid this out just to have a visual here, right? So we have artificial intelligence,
137
00:25:10.580 --> 00:25:13.119
Ryan Dymek: and we have features that are derived from the data.
138
00:25:13.761 --> 00:25:37.930
Ryan Dymek: Oops, actually, I meant to show that here; sorry, that was a mistake. Overall, we have artificial intelligence here. Inside of that we've got classical programming, which I didn't draw a chunk for, and then we have machine learning. In machine learning, humans supply the features: this is what I was talking about, where we choose the features and the system derives the rules. So let's put this into context
139
00:25:38.644 --> 00:25:41.850
Ryan Dymek: around chess. Okay? So with chess,
140
00:25:41.880 --> 00:25:48.029
Ryan Dymek: we've got the classical programming model, where we would write all these rules. Okay?
141
00:25:48.190 --> 00:26:17.250
Ryan Dymek: What we do with machine learning in that context is get a record of a bunch of previously played chess matches. Maybe you have every move recorded as data, and you supply the system thousands of chess games. What it does is learn the rules, the rules being what moves to make, or how the pieces move on the board,
142
00:26:17.510 --> 00:26:28.159
Ryan Dymek: and it derives a lot of that from the data. You just supplied a bunch of games that have been played, and it can actually learn how to win the game based on those past games.
143
00:26:28.200 --> 00:26:47.539
Ryan Dymek: Okay? The problem is, we have a limitation on how many past games have been played; there is some finite amount there, right? And as far as the human supplying the features, we're talking about what we think is important in identifying what moves to make. So those are the features.
144
00:26:49.620 --> 00:26:51.460
Ryan Dymek: Let me get this to advance properly.
145
00:26:51.600 --> 00:26:55.699
Ryan Dymek: With deep learning, we have this idea where it
146
00:26:55.910 --> 00:26:58.459
Ryan Dymek: derives the features from the data
147
00:26:58.480 --> 00:27:12.849
Ryan Dymek: and the rules as well. This is interesting, because what deep learning can do here is actually play its own games and see the results. We really focus more on the outcome we want
148
00:27:12.970 --> 00:27:20.059
Ryan Dymek: and allow it to iterate on its own. So now, instead of feeding it, say, 10,000 past chess games,
149
00:27:20.150 --> 00:27:29.000
Ryan Dymek: it can play chess games against itself to see and learn what those moves do and accomplish, and now it could
150
00:27:29.170 --> 00:27:30.470
Ryan Dymek: train off of
151
00:27:30.560 --> 00:27:36.279
Ryan Dymek: hundreds of millions of games if it wanted; it could just play them over and over and over again.
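Here is a heavily simplified sketch of that self-play loop; the "game," the random policy, and the scoring are stand-ins for a real reinforcement-learning setup, but the shape of the idea is there:

```python
# Self-play in miniature: generate your own games, learn from the outcomes.
import random

def play_game(policy):
    """Simulate one 'game' and report (moves_made, won). Purely illustrative."""
    moves = [policy() for _ in range(10)]
    return moves, sum(moves) > 5   # arbitrary win condition for the toy game

move_value = {0: 0.0, 1: 0.0}      # crude value estimate for each possible move
for _ in range(100_000):           # a real system might run millions of games
    moves, won = play_game(lambda: random.choice([0, 1]))
    for m in moves:                # credit every move that appeared in a win
        move_value[m] += 1 if won else -1

print(move_value)  # move 1 ends up valued higher: it tends to appear in wins
```

No historical games were supplied; the system generated its own experience and scored it against the outcome, which is the key shift the speaker is describing.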
152
00:27:36.580 --> 00:28:02.429
Ryan Dymek: So it's basically simulating, right? It's able to run these simulations, see what the outcomes are, and learn what the best moves are. That's essentially what deep learning would be. Okay? Now, where generative AI comes into the mix is that it is truly able to create new, original content. Deep learning and machine learning were always based on historical data,
153
00:28:02.620 --> 00:28:23.079
Ryan Dymek: well, deep learning not necessarily on historical data, but I still have to supply it some sort of inputs. And it wouldn't necessarily create new content; it would come up with ways to beat the game, right? A lot of times, deep learning and machine learning were used for things like
154
00:28:23.180 --> 00:28:37.509
Ryan Dymek: what to do next: a single prediction, or categorizing things, right? Identifying fraud, all of those types of things. But generative AI takes us to a whole new level,
155
00:28:37.720 --> 00:28:55.330
Ryan Dymek: where we actually generate completely new content. Okay? The way this generally works is, there's an element that generates the content, and it's put up against an engine that analyzes past content to see whether it matches the rules that have been derived, right? So it
156
00:28:55.360 --> 00:29:11.550
Ryan Dymek: looks at various original works, which could be many petabytes, or even exabytes, of data. This is where there are massive amounts of data available to us, and it can generate new work based on that data.
157
00:29:11.560 --> 00:29:16.150
Ryan Dymek: But what it's doing is learning from that data and generating completely new things.
158
00:29:16.320 --> 00:29:23.330
Ryan Dymek: Those new things might resemble what it learned, right? Just like if
159
00:29:23.600 --> 00:29:39.750
Ryan Dymek: you went to a class and were taught something: your knowledge is going to be based on what you learned, what you were presented, right? So it's very common that you may seem to repeat
160
00:29:39.790 --> 00:30:00.709
Ryan Dymek: something, but you're not just regurgitating what was told to you; you might be reframing it in your own words. That's the way generative AI tends to do what it does, and the more data it has, the more it can produce unique works. Some of you may not realize this, but a couple of years back, AI was able to win an art contest
161
00:30:00.760 --> 00:30:04.319
Ryan Dymek: where nobody knew it was AI-derived art,
162
00:30:04.470 --> 00:30:05.290
Ryan Dymek: and
163
00:30:07.090 --> 00:30:16.380
Ryan Dymek: it ruffled some feathers. But it was quite interesting that AI was able to actually be artistic, and that was a big thing, because
164
00:30:17.730 --> 00:30:27.280
Ryan Dymek: that's something I always thought, right? For years I thought, well, we'll leave the artistic stuff, the creative stuff, to the human beings, and the repetitive work to computers, right?
165
00:30:28.280 --> 00:30:37.300
Ryan Dymek: So this is kind of crazy, right? We're now at a point where we can create new content, and this new content could be text, video, or images.
166
00:30:37.700 --> 00:30:58.480
Ryan Dymek: I mean, there are all these deepfake videos circulating on social media, and there are obviously big problems along with some of this stuff, too, right? So there are new challenges emerging, and I don't know that we have all the answers yet; it might be a while before we totally do. But bottom line, the world as a whole is leveraging this now.
167
00:30:58.480 --> 00:31:14.779
Ryan Dymek: Right? So this is just the big picture of where it all fits. Generative AI is looking at massive amounts of Internet data, as an example, and private data sets that have been supplied to it. We've got all this information being supplied, and it's learning from all of it,
168
00:31:14.980 --> 00:31:22.949
Ryan Dymek: everything from medical documents to legal documents to blog posts.
169
00:31:23.010 --> 00:31:33.469
Ryan Dymek: It can learn from current trends, images and styles. I mean, there are products out there now to
170
00:31:34.160 --> 00:31:36.330
Ryan Dymek: make your own
171
00:31:36.510 --> 00:31:37.390
Ryan Dymek: AI
172
00:31:37.860 --> 00:31:42.510
Ryan Dymek: imagery for some personal image you're trying to create, or for marketing content, all sorts of things, right?
173
00:31:45.060 --> 00:32:10.739
Ryan Dymek: Yes, that's exactly it, Dinesh: GenAI does use deep learning. That is absolutely true, and that's why I embedded it that way in this outline. All of this is artificial intelligence, and artificial intelligence can mean a lot of things, generically speaking. Then you have machine learning, which is part of that, and then deep learning, which is a form of advanced machine learning. Generative AI is a form of advanced deep learning.
174
00:32:11.190 --> 00:32:19.710
Ryan Dymek: Okay? So they're all stages as you get deeper into this discussion. So, yes, at its core, GenAI is deep learning. Okay?
175
00:32:20.280 --> 00:32:25.359
Ryan Dymek: So again, it's a subset of AI technologies, deep learning, as we talked about.
176
00:32:25.610 --> 00:32:30.679
Ryan Dymek: And you can do everything with it. It can make music, it can make art, it can make
177
00:32:31.702 --> 00:32:34.710
Ryan Dymek: creative works, writings.
178
00:32:34.820 --> 00:32:46.149
Ryan Dymek: It can write screenplays. I have a friend who's been working on screenwriting; he's been in the film industry for a couple of decades.
179
00:32:46.150 --> 00:33:14.949
Ryan Dymek: He's coming up with a script, and he's using AI to help him come up with new thoughts, new ideas, plot lines, all kinds of things. It's actually writing a bunch of the script, not all of it, right? It's giving him a base, based on what he tells it to do, mind you. So it's still very interactive; it's not just, make me a movie or make me a script. It's not that simple. He still has to interact with it and produce a lot of this, but it saves him hours of writing a lot of this by hand,
180
00:33:14.950 --> 00:33:27.079
Ryan Dymek: and he can go in and add his own flavor and his own feel and still make it his. But he's using these tools to really advance that. So, lots of things you can do with this.
181
00:33:27.350 --> 00:33:43.760
Ryan Dymek: Okay, when we talk about discriminative AI models, that's your traditional ML: models that basically classify or predict outcomes. That's been around; we've been doing it for some time. As I mentioned, I got into the ML landscape specifically
182
00:33:44.020 --> 00:33:45.580
Ryan Dymek: because of
183
00:33:45.978 --> 00:34:14.711
Ryan Dymek: security, right? We wanted to be able to predict things, everything from potential fraud with banking customers, where we're predicting whether or not a particular transaction might be fraudulent. That's really a yes or no: yes, this is fraud, or no, it's not. Or putting something in a group, labeling images, saying this image is an outdoor scene,
184
00:34:15.050 --> 00:34:19.850
Ryan Dymek: it's daytime, it's got a bird in it, it's got a human in it,
185
00:34:19.850 --> 00:34:32.010
Ryan Dymek: it's got a ball, whatever. So again, that's object recognition and the like. But really, if you think about it, each object is its own thing, so it's just identifying that one thing; it's not doing anything creative.
186
00:34:32.010 --> 00:34:44.510
Ryan Dymek: And that's stuff we've been doing for a while now, probably going on a good decade, with no problem, since about 2015. Let me go a little further back, a little more history here:
187
00:34:44.750 --> 00:34:47.860
Ryan Dymek: 2015, approximately, was a breaking point
188
00:34:47.980 --> 00:34:50.920
Ryan Dymek: where AI, specifically deep learning
189
00:34:50.940 --> 00:35:07.569
Ryan Dymek: surpassed human performance, attributed to this test that has been run every year. I don't know if it's continued, but it went on every year for quite a few years; it's called ImageNet, and you can go look it up. Basically, the idea was, you have a million images,
190
00:35:07.820 --> 00:35:11.679
Ryan Dymek: and there would be a thousand possible
191
00:35:12.505 --> 00:35:20.310
Ryan Dymek: categories that could be chosen, and in some cases a single image might contain many things from those 1,000 categories,
192
00:35:20.810 --> 00:35:23.609
Ryan Dymek: and they would compare human beings to AI,
193
00:35:24.130 --> 00:35:31.909
Ryan Dymek: and at about 2015, AI surpassed the accuracy rate of the human beings, and it did it in a fraction of the time.
194
00:35:31.990 --> 00:35:34.129
Ryan Dymek: So that was the break point: 2015.
195
00:35:34.220 --> 00:35:40.440
Ryan Dymek: This is where things started really accelerating, right? Higher accuracy, and faster.
196
00:35:40.850 --> 00:35:46.099
Ryan Dymek: Why would you have human beings going through images saying, that's this, that's that? It doesn't make any sense anymore.
197
00:35:48.530 --> 00:35:50.890
Ryan Dymek: So, yeah.
198
00:35:51.090 --> 00:35:52.840
Ryan Dymek: I'm just looking at the chat here.
199
00:35:53.490 --> 00:36:14.390
Ryan Dymek: "Generative AI's method of using deep learning as its engine produces..." Yeah. So it's all about algorithms and predictability. If you think about it, when it generates new work, it's still looking at past data to do that a lot of the time, but what it does is understand: here are all the different images and different things it might
200
00:36:14.420 --> 00:36:21.890
Ryan Dymek: be based off of. It's a little easier to talk about if I equate this to something like text:
201
00:36:22.140 --> 00:36:40.479
Ryan Dymek: as it's writing a novel or something, it has to look back at what it has previously written and make predictions about what should come next, word after word after word, or even whole thoughts that need to occur. So it's all algorithmic at the end of the day, right? Computers aren't magic,
202
00:36:40.480 --> 00:36:54.390
Ryan Dymek: even though that Arthur C. Clarke quote would say otherwise, right? It's not actual magic; there is an algorithm being applied here. These algorithms have really improved over the years, and the system is basically able to circle back
203
00:36:54.400 --> 00:37:03.050
Ryan Dymek: and look at what's previously been written to understand context, context clues, and things like that, and project what should come next.
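Here is that look-back-and-predict loop in miniature as a bigram model; real systems use transformers over far longer contexts, but the word-after-word prediction idea is the same:

```python
# Next-word prediction in miniature: look at context, predict what follows.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count which word tends to follow which in the past text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# "Generation": repeatedly look back at the last word, sample what comes next.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```

The output resembles the training text without being a verbatim copy, which is the reframing-in-its-own-words behavior described above.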
204
00:37:03.320 --> 00:37:32.760
Ryan Dymek: That's the general idea of what it's doing. We could get into the really deep dives if we wanted to, but that's not for this session. Okay? And it doesn't necessarily need to be retrained on everything; there is a whole training process, but then when new stuff comes in, it can analyze new data and predict new things. That's the thing about generative AI, too: you're able to provide it information on top of the base training. Let me explain what I mean by training here, and we'll talk a little more about this.
205
00:37:33.020 --> 00:37:39.360
Ryan Dymek: I can train on a bunch of data as a foundation. In fact, we have what we call foundation models; I'll show you those a little later.
206
00:37:39.400 --> 00:37:54.529
Ryan Dymek: Foundation models are basically, think of them like programmers' libraries. I don't know who here might be a programmer or does anything with that at all, but if we had to work from scratch on everything we programmed today,
207
00:37:54.804 --> 00:38:17.559
Ryan Dymek: we would be reprogramming things from the 1960s, 1970s, and 1980s. It wouldn't make any sense. Instead, programmers have these libraries of things that have already been written, decades and decades of work, and sometimes all I have to do is reference that library with one line in my code, and all that past work can be leveraged, right?
208
00:38:17.570 --> 00:38:42.260
Ryan Dymek: That's what's going on with ML now, and with generative AI: we're able to leverage past models that have been derived or developed. Say a model that's built to identify images might be leveraged to create new images, if that makes sense. It's already been trained on past images, maybe on enormous numbers of works of art
209
00:38:42.260 --> 00:38:52.250
Ryan Dymek: and so on, and we use that to make a new model that has that knowledge and can now produce new works of art.
210
00:38:52.610 --> 00:39:03.170
Ryan Dymek: That's the general idea: we're able to innovate one layer on top of another. It's a stepping-stone process. And so
211
00:39:03.460 --> 00:39:11.620
Ryan Dymek: you have these base foundation models: things designed to do things like text to voice, voice to text,
212
00:39:12.222 --> 00:39:16.299
Ryan Dymek: models that are designed to produce images
213
00:39:16.620 --> 00:39:25.789
Ryan Dymek: from text. What it does is first understand your text; once it understands your text, it's able to produce the new content, right?
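To make the library analogy concrete, here is a minimal sketch using the Hugging Face transformers library, assuming it is installed and the gpt2 weights can be downloaded; one line pulls in all of that past training:

```python
# Reusing a foundation model like a library: no training of our own needed.
from transformers import pipeline

# One line references an already-trained model, like importing a library.
generator = pipeline("text-generation", model="gpt2")

result = generator("AI will change the workplace by", max_new_tokens=30)
print(result[0]["generated_text"])
```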
214
00:39:25.910 --> 00:39:31.229
Ryan Dymek: And Annie, I'm sorry, has a raised hand. I'm not sure; go ahead, actually.
215
00:39:32.110 --> 00:39:37.919
Ryan Dymek: We probably don't want to open the mics up too much for a hundred and some odd people. You might have to ask your question in the
216
00:39:38.400 --> 00:39:41.080
Ryan Dymek: chat, or
217
00:39:41.560 --> 00:39:44.140
Ryan Dymek: in the Q&A, actually, if possible.
218
00:39:47.450 --> 00:39:48.250
Ryan Dymek: Oops!
219
00:39:49.230 --> 00:39:51.390
Ryan Dymek: Not sure what's going on, actually, was that?
220
00:39:53.870 --> 00:39:55.039
Ryan Dymek: Well, I'm not
221
00:39:55.100 --> 00:40:01.939
Ryan Dymek: sure if I'm missing something here. Okay, anyway, I'm going to move on. If for any reason there's a problem hearing me, or if there are any challenges, I don't know if
222
00:40:01.960 --> 00:40:08.420
Ryan Dymek: everything's working; you can mention it in the chat if there's something going on here. But in any event,
223
00:40:08.860 --> 00:40:09.640
Ryan Dymek: I'm
224
00:40:10.570 --> 00:40:24.619
Ryan Dymek: back on that idea, though, of being able to build our work off of past work: that's the whole thing, so we don't have to constantly train it. Okay? There are tools that already understand speech, as an example, so I can just talk to them. And
225
00:40:24.650 --> 00:40:32.319
Ryan Dymek: the new model I produce doesn't have to relearn that. I don't have to train on speech;
226
00:40:32.550 --> 00:40:36.249
Ryan Dymek: I don't have to train on voice, because that's already been done.
227
00:40:36.780 --> 00:40:53.749
Ryan Dymek: So now I can incorporate that into a larger application: it already understands how to do voice to text, my programming code can allow you to talk to it, and I can interpret what you say as text input. Those are the ideas; there are these layers of models that we're going to leverage.
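And a sketch of that layering, again via the transformers library; the Whisper model name and the audio file below are assumptions for illustration:

```python
# Layering models: a pretrained speech-to-text model feeds our own app logic.
from transformers import pipeline

# Speech recognition is already solved for us; we just reference the model.
transcribe = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

text = transcribe("meeting_question.wav")["text"]  # hypothetical audio file

# Our application code only ever deals with plain text input.
if "schedule" in text.lower():
    print("Routing to the calendar feature:", text)
```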
228
00:40:53.960 --> 00:40:56.470
Ryan Dymek: Just a quick check-in: you can hear me okay? Just
229
00:40:56.810 --> 00:41:00.430
Ryan Dymek: put something in the chat, somebody, just to make sure audio is working. Okay?
230
00:41:00.560 --> 00:41:04.889
Ryan Dymek: Something looked a little weird to me in the Zoom interface; I was just making sure. Okay.
231
00:41:05.130 --> 00:41:07.509
Axcel ILT: That's great, Ryan. Just by the way, I can hear you great.
232
00:41:07.830 --> 00:41:14.929
Ryan Dymek: Okay, good to hear. My Zoom interface shifted and did something weird. So, in any event,
233
00:41:15.890 --> 00:41:17.249
Ryan Dymek: all good.
234
00:41:17.360 --> 00:41:33.920
Ryan Dymek: So generative models, again, can produce completely new work. I want to point out that it might resemble some previous works, and this is where things get really interesting, and sometimes a problem for us, just to be aware.
235
00:41:34.170 --> 00:41:46.069
Ryan Dymek: There was actually a case, I don't know if anybody's followed this or not; I don't have the direct case information in front of me here, but there was a situation where an attorney
236
00:41:46.170 --> 00:41:57.069
Ryan Dymek: cited a court case. Attorneys do this all the time: they reference precedent, previous cases and the decisions on those cases.
237
00:41:57.190 --> 00:42:04.859
Ryan Dymek: This attorney had used ChatGPT to produce some research; you can use these tools for research,
238
00:42:04.920 --> 00:42:29.200
Ryan Dymek: and it responded with case information and case details. The attorney used that information in court. It turned out the cited case never happened; it was completely made up, fictitious. That's what we call a hallucination. AI is still at this weird stage where it can hallucinate. Where those hallucinations come from is that it saw a bunch of past information, and it's
239
00:42:29.340 --> 00:42:33.389
Ryan Dymek: predicting; it's making up new creative work.
240
00:42:33.880 --> 00:42:50.130
Ryan Dymek: So there's a trick to using AI in that sense: I might have to tell it to supply me quotes, or supply me sources and things like that, to push it not to hallucinate if I'm using it as a research tool, for example. Okay?
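As an illustration, here is a minimal sketch with the OpenAI Python client; the model name and the exact instruction wording are assumptions, and you would still verify every citation by hand:

```python
# Prompting for sources: instructions that make hallucinations easier to catch.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Cite a verifiable source for every factual claim. "
                    "If you cannot cite one, say 'I am not sure' instead."},
        {"role": "user",
         "content": "Summarize key court rulings on AI-generated art."},
    ],
)
print(response.choices[0].message.content)  # still cross-check each citation
```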
241
00:42:50.210 --> 00:42:54.890
Ryan Dymek: So, bottom line, we're not at the point yet where we can a hundred percent trust AI.
242
00:42:54.940 --> 00:43:22.249
Ryan Dymek: Okay? I want to make sure we're aware of that: when you go to use these tools, if you use ChatGPT or something for personal research, if you want it to educate you on something, you're going to want to cross-reference and do your own follow-up. It can get you, say, 80% of the way there, but know that not everything it says is necessarily completely true, because it is creating new pieces of work, right? That is the whole thing about generative AI. Okay,
243
00:43:22.400 --> 00:43:27.529
Ryan Dymek: yeah, it's hard to say, Paul. Do you trust humans 100%? Right? That's an interesting question,
244
00:43:27.550 --> 00:43:32.669
Ryan Dymek: because, like I said, as of 2015 it was more accurate than humans at image recognition.
245
00:43:32.980 --> 00:43:38.809
Ryan Dymek: Okay? And that was almost a decade ago now, if you think about it, here in 2024.
246
00:43:38.890 --> 00:43:51.680
Ryan Dymek: Right? Exactly. I don't trust humans either, right? So the question is, how much? At some point we have to put trust in people doing things, just like we might have to put trust in these tools. Are they ever going to be a hundred percent? Probably not.
247
00:43:52.020 --> 00:43:53.510
Ryan Dymek: I will tell you, though,
248
00:43:53.940 --> 00:44:09.800
Ryan Dymek: there are AI models in use where people's lives are in the hands of the model, right? Everything from when you hop on an airplane, there's AI being leveraged; in the electric vehicles out there, there's AI being leveraged;
249
00:44:09.830 --> 00:44:13.060
Ryan Dymek: in self-driving cars, AI is being leveraged,
250
00:44:13.310 --> 00:44:18.189
Ryan Dymek: right? All that stuff. So when those models have to be built,
251
00:44:18.240 --> 00:44:33.880
Ryan Dymek: obviously the training process is taken a bit more seriously. I'm not saying we don't otherwise take training seriously, but there are extra layers of things we can put in place, things we can do, and there's going to be a different degree of rigor for different uses of AI.
252
00:44:34.030 --> 00:44:46.569
Ryan Dymek: When I'm doing a marketing campaign, if I don't get it perfectly right, I'm okay; I might just want to get that campaign done and out of the way and move on. Meanwhile, if I'm making something where lives are on the line,
253
00:44:46.570 --> 00:45:06.759
Ryan Dymek: I'm not going to accept a very high error rate, right? In that case, I want to make sure it's very, very accurate, and that just means more work has to be put into it: more resources for training, more data, to make as accurate a prediction as possible when it does predict.
254
00:45:06.820 --> 00:45:14.739
Ryan Dymek: As an example, the medical system is using this stuff to help predict cancers and things like that today. That's very powerful.
255
00:45:14.820 --> 00:45:24.589
Ryan Dymek: But that doesn't mean we just wait for the system to tell us whether somebody has cancer or not. It may just be a signal that says:
256
00:45:24.640 --> 00:45:33.910
Ryan Dymek: go look further, right? That's the whole thing. We're not going to put it all on one model and say, that's it. Okay?
257
00:45:34.070 --> 00:45:40.250
Ryan Dymek: So I'm going to show you some live stuff here. Okay, we're just talking big picture now, and then we're going to do some demos and show you some things.
258
00:45:40.390 --> 00:45:47.099
Ryan Dymek: These are a couple of the key developments leading to the GenAI environments.
259
00:45:47.929 --> 00:45:54.560
Ryan Dymek: We have what we call GANs, generative adversarial networks. I can never say that for some reason.
260
00:45:55.088 --> 00:46:12.910
Ryan Dymek: Basically, this is the idea: we have what we call a generator and a discriminator. So the idea is one system and another system work in tandem with one another. One is generating the content; the other one is discriminating the content, basically looking at it and comparing it
261
00:46:12.910 --> 00:46:25.270
Ryan Dymek: with real data and saying, this looks like what it should look like. Right? So if you're talking text, there's a lot of text generation happening today, where again, let's say I'm making
262
00:46:25.667 --> 00:46:48.429
Ryan Dymek: an article, or a white paper, or something. I might use AI to help me generate the bulk of that content. Well, if I'm doing that, does it sound good? Does it sound like a human wrote it, not like a computer wrote it? And it used to be, not that long ago, just a year or two ago, that with a lot of this generated content you could read it and go, that sounds like AI.
263
00:46:48.540 --> 00:46:50.760
Ryan Dymek: Okay, today.
264
00:46:50.860 --> 00:47:14.439
Ryan Dymek: you won't know the difference, if it's a good system and the prompts were good. And so in that regard, part of this is that discriminator comparing against real data and improving itself on a regular basis. It's always getting better, right? It's always learning to say: look, what we generated is not right; compared with real stuff it doesn't sound right, it doesn't have the same flow, all of those things.
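For reference, a minimal sketch of the generator/discriminator idea in code, assuming PyTorch; the session shows no code, so the shapes, data, and training loop here are purely illustrative:

    import torch
    import torch.nn as nn

    # Generator: noise in, fake sample out. Discriminator: sample in, realness score out.
    G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real data" cluster
        fake = G(torch.randn(64, 16))           # generated content

        # Discriminator learns to tell real (1) from generated (0)
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator improves by trying to get its fakes scored as real
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The two networks push against each other, which is exactly the "always getting better" dynamic described above.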
265
00:47:15.140 --> 00:47:16.560
Ryan Dymek: A little note on this
266
00:47:16.760 --> 00:47:23.960
Ryan Dymek: Going back over a decade now, Google actually has an assistant,
267
00:47:24.020 --> 00:47:35.300
Ryan Dymek: an AI assistant, that'll make phone calls for you, call places, and book appointments on your behalf. And people on the other end don't know that they're getting a phone call from a computer.
268
00:47:35.780 --> 00:47:40.050
Ryan Dymek: And what's interesting is it even injects things like pauses and breaths
269
00:47:40.200 --> 00:47:45.260
Ryan Dymek: and ums at times things like that to make it sound more human-like.
270
00:47:45.270 --> 00:47:53.769
Ryan Dymek: And what's crazy is you could be on the receiving end of this. Maybe you're booking a reservation at a restaurant and you answer the phone.
271
00:47:54.170 --> 00:47:59.979
Ryan Dymek: That person answering the phone may be actually talking to Google. And they've been doing that for 10 years now, approximately.
272
00:48:00.120 --> 00:48:08.149
Ryan Dymek: So that technology is not new. And just think: the stuff that we've seen in the last couple of years has amplified that many times over.
273
00:48:09.580 --> 00:48:16.450
Ryan Dymek: So then we have what we call transformer-based models. If you've heard of ChatGPT, the T in ChatGPT stands for transformer.
274
00:48:16.470 --> 00:48:22.340
Ryan Dymek: And again, these actually came from that white paper back in 2017 that we mentioned earlier.
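As a taste of what a transformer-based model looks like in practice, the Hugging Face transformers library can run one in a few lines; this is my example, and the model name (gpt2) is just a small public stand-in:

    from transformers import pipeline

    # Load a small transformer-based text-generation model
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Generative AI is", max_new_tokens=30)[0]["generated_text"])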
275
00:48:22.550 --> 00:48:30.010
Ryan Dymek: So then we have these concepts of foundation models. And so this is what I was referring to a little bit ago. A foundation model is kind of like a library
276
00:48:30.560 --> 00:48:31.090
Ryan Dymek: for
277
00:48:31.100 --> 00:48:38.880
Ryan Dymek: AI. And so the idea here is, again, you've got models that already know how to do certain things
278
00:48:38.910 --> 00:48:46.449
Ryan Dymek: in a smart, intelligent way, and we can leverage those in our larger applications and not worry about having to
279
00:48:47.113 --> 00:49:01.499
Ryan Dymek: train on that fresh every time. Right? So we're no longer at this phase in AI where it's all about having the data available to us. A lot of times the data has not only already been made available, it's already been trained on;
280
00:49:01.520 --> 00:49:13.080
Ryan Dymek: somebody has already produced models to do that work, and then we can incorporate that into a larger environment. And that larger environment might use not just one model; it might use hundreds of models to do what it does.
281
00:49:13.190 --> 00:49:14.770
Ryan Dymek: when we talk about a model.
282
00:49:14.990 --> 00:49:29.849
Ryan Dymek: the word model in the machine learning sense actually means the trained algorithm. Basically, it's the brain. It's like if you could package up your knowledge into something programmatic.
283
00:49:30.070 --> 00:49:32.670
Ryan Dymek: That's basically what a model is. Okay.
284
00:49:32.960 --> 00:49:50.667
Ryan Dymek: That's kind of the big picture. And so foundation models are typically used for layers of things. Again, if I've got something that's voice-controlled: if you use, like, Siri on an iPhone, or if you use
285
00:49:52.516 --> 00:50:08.659
Ryan Dymek: "Okay, Google" on Android, or Alexa, or Google Home, or any of those, right? Those are voice systems. Well, they're actually using voice-to-text behind the scenes, so that the computer systems receive text instructions, which they can work off of a whole lot easier. Right?
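A hedged sketch of that voice-to-text layer, again using the transformers library; the model choice (openai/whisper-tiny) and the audio file name are illustrative assumptions:

    from transformers import pipeline

    # Voice-to-text as a reusable foundation layer
    asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
    result = asr("meeting_audio.wav")   # hypothetical local audio file
    print(result["text"])               # downstream systems work off this text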
286
00:50:12.440 --> 00:50:24.970
Ryan Dymek: J asks: SME, just quantify the acronym, because there's a lot that SME could mean, so clarify that for me. Well, a model is basically a trained algorithm. It's the knowledge. Right?
287
00:50:25.080 --> 00:50:31.760
Ryan Dymek: So, okay, yeah, that's kind of what I was thinking, like an SME, right? A lot of people know SME as being a subject matter expert.
288
00:50:32.088 --> 00:50:37.100
Ryan Dymek: By the way, like I said, that's one of the problems: the acronyms today overlap in a lot of areas.
289
00:50:37.669 --> 00:50:55.899
Ryan Dymek: So in the case of this, yeah, it kind of equals an SME. You could argue that it's your subject matter expert on that thing, right? The thing that it's been trained on. And so when somebody makes, as an example, voice-to-text, it's been trained on
290
00:50:56.150 --> 00:51:00.039
Ryan Dymek: probably hundreds of thousands, if not millions of hours of voice.
291
00:51:00.260 --> 00:51:02.270
Ryan Dymek: right to be able to interpret
292
00:51:02.500 --> 00:51:11.109
Ryan Dymek: voice. And, as you can imagine, take these systems that we're talking to: let's say you do use Siri or one of these things, and as you're speaking to it, it has problems.
293
00:51:11.620 --> 00:51:18.430
Ryan Dymek: These systems behind the scenes are actually able to use that voice interaction to then turn around and learn from it
294
00:51:18.730 --> 00:51:21.960
Ryan Dymek: and understand what it got wrong so that it can get it right later.
295
00:51:22.050 --> 00:51:33.299
Ryan Dymek: So we actually are constantly re-feeding information in to improve these systems. But that's just one piece, let's say voice-to-text, right? That's one layer in that whole mix.
296
00:51:33.350 --> 00:51:58.069
Ryan Dymek: And so that would be layered in with other things. Like, if I asked Siri to just play me some music, what music should it play? Right? Maybe it decides to play me something that it thinks I like. Well, that's a whole other model, different from the voice-to-text piece. And so there are all these layers, and those layers are a lot of times provided in the form of what we call foundation models. Okay,
297
00:51:58.420 --> 00:52:09.959
Ryan Dymek: but for foundation models specifically, when you hear that term, a lot of times we are referring to generative AI, which is again used for creating new works, new content of some sort.
298
00:52:11.120 --> 00:52:18.769
Ryan Dymek: And so that's kind of the general idea there. So there's this product out there called Hugging Face.
299
00:52:18.920 --> 00:52:24.730
Ryan Dymek: So I didn't actually have a poll set up for this, but I've got a question for the group here.
300
00:52:25.620 --> 00:52:30.629
Ryan Dymek: How many of you are familiar, or have used, or even just heard of.
301
00:52:30.660 --> 00:52:32.069
Ryan Dymek: GitHub?
302
00:52:32.100 --> 00:52:37.940
Ryan Dymek: GitHub. And I'll put it in the chat, spelled out, just for those of you. Right? GitHub. Okay,
303
00:52:38.576 --> 00:52:54.069
Ryan Dymek: if you've done anything with code, if you've ever written anything as a programmer in today's landscape, you've probably heard of GitHub. Okay? And yes, I'm sure a lot of you probably use it every single day, or use something like it. Maybe you have your own Git repository, or whatever. Okay?
304
00:52:54.200 --> 00:53:04.010
Ryan Dymek: Well, the point is, for those that aren't maybe familiar with GitHub, it's just one of many products. I'm not, you know, advertising GitHub or anything like that; I don't have any ties to them.
305
00:53:04.020 --> 00:53:17.960
Ryan Dymek: But they're a place where I can store my code, my programming code, and I can version-control it, meaning I know every single modification that's happened along the way. I can hook automation into it. There's all kinds of crazy things you can do with it.
306
00:53:18.460 --> 00:53:21.140
Ryan Dymek: Well, what GitHub is to code,
307
00:53:21.260 --> 00:53:24.239
Ryan Dymek: Hugging Face is to machine learning models.
308
00:53:24.800 --> 00:53:49.539
Ryan Dymek: So what happens now is there's an era of machine learning models that have been produced. Again, take the most basic example of voice-to-text. Okay, you don't need to create that, but you do need to go find it. You need to go get that model if you want to use it in your application. Well, there's now a repository that exists out there of these models. Many of them are free;
309
00:53:49.540 --> 00:54:15.960
Ryan Dymek: some of them are commercial. Okay? So some of them are licensed; some of them require payment. When I say that, it's not the old payment model of forking over a whole bunch of money upfront. With most of these, if you do have to pay, you're going to pay to use it, meaning maybe each interaction. So you don't have to come up with, you know, $100,000 or whatever; you can just use what you need. Yeah, sure, there you go: Blockbuster. That's an old reference. Oh, my gosh,
310
00:54:16.532 --> 00:54:40.519
Ryan Dymek: are you talking about, like, Blockbuster Video, the old movie place? You can go rent what you need to rent, right? Yeah. Maybe in today's terms it's more like Netflix, right? I can just go watch a movie as I want, just go find it on there, and off you go. Now, I would say maybe Netflix isn't the perfect example, because there you've already paid just a monthly subscription fee. But
311
00:54:40.520 --> 00:54:51.129
Ryan Dymek: in this case you might pay for each model, potentially, depending on the model. Different companies make different models; in other cases some are completely free to use. All right, many, many are completely free to use.
312
00:54:51.330 --> 00:54:58.080
Ryan Dymek: But yeah, it's pay-for-it-as-you-use-it most of the time, if you are going to pay. Yeah, there you go, more like a Redbox rental. There you go,
313
00:54:58.623 --> 00:55:16.929
Ryan Dymek: or an Amazon Prime or something like that, where we actually pay to watch it, or pay to use it, right? That kind of thing. So basically, if you are paying for it, it's pay-per-use, not coming up with this huge investment just to say, I want access to this. That's the thing: some of this stuff
314
00:55:17.110 --> 00:55:24.179
Ryan Dymek: was available a long time ago, but it would have been very, very expensive for you to either get the hardware for it.
315
00:55:24.847 --> 00:55:47.252
Ryan Dymek: or anything like that. So, "URL to the ML GitHub type thing?" Yeah, that's what I'm getting at. It's called Hugging Face; that's the name of the product. I'm going to show it to you here in a minute, but you can literally just do a Google search for Hugging Face. AWS does not own Hugging Face, but there is a tight integration with them; there's a really good relationship. Hugging Face chose AWS, actually, as their core
316
00:55:47.540 --> 00:55:54.590
Ryan Dymek: partner, if you will, in running models and using them. So I'll explain that, and I'll show you that here in a few minutes.
317
00:55:55.990 --> 00:56:18.900
Ryan Dymek: So again, this is what we call a model hub. It's a place where there are all these models just sitting out there that we can use. And, by the way, in a lot of cases you don't need to be a programmer at all to actually be able to leverage a lot of this stuff today. That's what's really cool about cloud technologies and about a lot of these tools that are available: we can just go use them
318
00:56:18.900 --> 00:56:32.739
Ryan Dymek: and not have to be a programmer to do it. And if we are a programmer, we can save ourselves hours of work, if not weeks or months of work, compared to what this used to be. Right? It used to be that the companies that had access to this stuff
319
00:56:32.740 --> 00:57:02.289
Ryan Dymek: were special, right? And they had a huge investment, many, many millions of dollars. But now we're in a realm where even your super small business right down the street, maybe your local family-owned single restaurant, could be using this stuff to make their own marketing campaigns. They don't have to go out and spend a ton of money on marketing. They could use this stuff to predict, the more traditional ML: use it to predict inventory and things like that.
320
00:57:02.290 --> 00:57:04.089
Ryan Dymek: They could be doing that in the cloud
321
00:57:04.100 --> 00:57:13.619
Ryan Dymek: and not have to pay but maybe a couple of pennies. To be honest, in some cases some of this stuff will literally be pennies to do, and that's just crazy. Or completely free at times.
322
00:57:14.280 --> 00:57:16.979
Ryan Dymek: So it'll work across platforms. And
323
00:57:17.100 --> 00:57:28.390
Ryan Dymek: There are all these environments and things that are already produced for us. So, AWS. Again, AWS does not own Hugging Face; they are completely different organizations. But there is a very tight relationship here. I'll show you Hugging Face here in a minute.
324
00:57:28.580 --> 00:57:41.939
Ryan Dymek: Okay, but bottom line: there are a bunch of tools that are tightly integrated. If I'm in AWS, in some of the AWS tooling that's out there, and I choose some of the integrated stuff in AWS,
325
00:57:42.140 --> 00:57:52.219
Ryan Dymek: if I click deep enough, it'll take me back to Hugging Face in a lot of cases, and vice versa. I can be in Hugging Face and say, I want to run this model, and it might take me to AWS.
326
00:57:52.580 --> 00:58:18.550
Ryan Dymek: By the way, I've kind of assumed that some of you might already have cloud knowledge. Just high level here, if you didn't know: AWS, Amazon Web Services, is Amazon's prime business. I shouldn't use the word prime there. It is their core business. Actually, Amazon makes more money supplying servers and infrastructure to companies than they do selling products on amazon.com.
327
00:58:19.050 --> 00:58:23.699
Ryan Dymek: Ironically enough, amazon.com today is basically their side business.
328
00:58:23.870 --> 00:58:28.000
Ryan Dymek: Amazon's AWS basically runs the world.
329
00:58:28.391 --> 00:58:42.380
Ryan Dymek: I used Netflix as an example a few minutes ago. Netflix runs 100% on AWS servers, if you didn't know that. So Amazon is running Netflix from an infrastructure and hardware perspective.
330
00:58:42.450 --> 00:59:08.560
Ryan Dymek: Okay, so if I want to go use machine learning technology, I don't need to spin up really expensive servers anymore. Right? I can just go use Amazon servers, or you can use others, Google or Azure. But Amazon is definitely the big one here, because Hugging Face chose them over companies like Google for some very important reasons, and specifically it was their breadth of options available,
331
00:59:08.970 --> 00:59:14.609
Ryan Dymek: all right, and their sheer size, quite frankly. So let me actually show you this.
332
00:59:15.790 --> 00:59:19.030
Ryan Dymek: So I'll just show you Hugging Face's interface here.
333
00:59:19.530 --> 00:59:41.860
Ryan Dymek: So this is Hugging Face. I know, some of these names that we see in these technologies today, it's kind of crazy. Yes, Hugging Face is actually a product. Okay, I haven't just been making something up here; that's its name. All right, so that is the name of the product. You'll see here it's huggingface.co. I'll put this in the chat just for anybody else that wants to follow along.
334
00:59:41.870 --> 00:59:46.929
Ryan Dymek: You can get a free account here; you don't have to pay anything to create an account.
335
00:59:46.970 --> 00:59:51.300
Ryan Dymek: It's a good idea to create an account, because you can do things in here when you have an account that you can't do without one.
336
00:59:51.590 --> 00:59:54.180
Ryan Dymek: But if I come in here and go to models.
337
00:59:54.731 --> 01:00:05.310
Ryan Dymek: Actually, those are the models I'm following, up here under Models. Okay, the account I'm using here is just for demo purposes, so I don't really use it on a regular basis, and it doesn't have a lot of stuff populated.
338
01:00:05.460 --> 01:00:11.110
Ryan Dymek: But if I go to Models here, look at this: over half a million models available for us to use.
339
01:00:11.230 --> 01:00:28.420
Ryan Dymek: So these are things that big organizations have been getting lots of popularity around, and we can just go use their models. Right? Again, in a lot of cases it may be completely free; in other cases, some sort of pay-per-use, or however we might want to use it. Right?
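That same catalog can also be browsed programmatically with the huggingface_hub library; a minimal sketch, where sorting by downloads is just one choice among several:

    from huggingface_hub import list_models

    # The five most-downloaded models on the Hub
    for m in list_models(sort="downloads", direction=-1, limit=5):
        print(m.id)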
340
01:00:28.670 --> 01:00:47.776
Ryan Dymek: But the point is, you can kind of scale that and basically spend as much as you want to spend. You don't have to go make this huge investment like we once had to. Okay? So if I come in here, I can sort by different popularity measures and things: most likes, most downloads. So,
341
01:00:48.380 --> 01:01:08.510
Ryan Dymek: this Stable Diffusion is really, really popular, and this company right here, Stability AI, has produced a bunch of stuff. Okay, I just clicked into it; this is one of the models that they've produced, and I can just use this. Okay? So you see over here, I can deploy it.
342
01:01:08.560 --> 01:01:14.859
Ryan Dymek: So if I want to use this model, I can actually use Hugging Face's interfaces to run it and use it.
343
01:01:15.090 --> 01:01:41.669
Ryan Dymek: I could also use Amazon. You see right here, this says SageMaker, which is an Amazon product; it's Amazon's core machine learning, or core AI, tool. So this tool, SageMaker, will allow me to take this and run it in AWS right from here. It's actually quite an easy thing to do. It actually shows me the code: if I'm a programmer, it will show me the code that I would need to use to basically plug this right into my application.
344
01:01:41.810 --> 01:01:43.859
Ryan Dymek: And I could use AWS to do it.
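The generated snippet is along these lines; a hedged sketch with the SageMaker Python SDK, where the IAM role, container versions, task name, and instance type are all illustrative assumptions that the real model card fills in for you:

    from sagemaker.huggingface import HuggingFaceModel

    # Point SageMaker at a Hugging Face Hub model instead of packaging our own
    model = HuggingFaceModel(
        env={"HF_MODEL_ID": "stabilityai/stable-diffusion-2-1",  # Hub model to serve
             "HF_TASK": "text-to-image"},                        # assumed task name
        role="arn:aws:iam::123456789012:role/SageMakerRole",     # hypothetical IAM role
        transformers_version="4.37",
        pytorch_version="2.1",
        py_version="py310",
    )

    # Host it on a GPU endpoint our application can call
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.xlarge")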
345
01:01:44.150 --> 01:02:08.729
Ryan Dymek: Okay, so this provides me whatever I need to just make this happen. Okay? And so this particular tool, this particular model, I should say, produces images. Okay, there are a lot of different image models out there, and they have different behaviors and different quality of outputs. In some cases, maybe one is better at producing one thing than another, and so on.
346
01:02:08.950 --> 01:02:12.380
Ryan Dymek: And so this is based on basically text
347
01:02:12.390 --> 01:02:18.320
Ryan Dymek: that you could put in. In fact, let me see here, if I go to spaces
348
01:02:18.410 --> 01:02:19.680
Ryan Dymek: let me actually do this.
349
01:02:20.110 --> 01:02:23.619
Ryan Dymek: This is all within Hugging Face. If I go to Spaces
350
01:02:24.270 --> 01:02:30.150
Ryan Dymek: and I find that same model, there's actually a Space for this. I'll show you what a Space is here in a second.
351
01:02:30.692 --> 01:02:34.719
Ryan Dymek: Let's sort by most likes. Okay, Stable Diffusion, right there.
352
01:02:35.440 --> 01:02:40.590
Ryan Dymek: So if I come in here, Stable Diffusion, let's make an image now.
353
01:02:40.750 --> 01:02:42.050
Ryan Dymek: I
354
01:02:42.450 --> 01:02:54.360
Ryan Dymek: am always leery about this in a live demo, because you actually have no clue what it's going to produce. So this can actually be a little bit tricky at times. But let's give this a try.
355
01:02:55.570 --> 01:03:01.499
Ryan Dymek: Like I said, I was playing around with this earlier, before our sessions, to come up with some good prompts to play around with.
356
01:03:01.550 --> 01:03:03.795
Ryan Dymek: But let's do this:
357
01:03:05.200 --> 01:03:08.569
Ryan Dymek: let's do a watercolor painting
358
01:03:10.460 --> 01:03:13.300
Ryan Dymek: of the Dallas
359
01:03:13.770 --> 01:03:14.890
Ryan Dymek: Texas
360
01:03:15.020 --> 01:03:18.260
Ryan Dymek: skyline. I'm in Dallas, so I figured, hey, why not?
361
01:03:18.440 --> 01:03:21.710
Ryan Dymek: Dallas, Texas skyline, using
362
01:03:22.320 --> 01:03:23.400
Ryan Dymek: Van
363
01:03:23.570 --> 01:03:24.830
Ryan Dymek: Gogh's
364
01:03:25.650 --> 01:03:27.330
Ryan Dymek: Starry Night
365
01:03:27.490 --> 01:03:29.520
Ryan Dymek: as inspiration.
366
01:03:31.280 --> 01:03:33.109
Ryan Dymek: So this is just
367
01:03:33.150 --> 01:03:43.049
Ryan Dymek: something I decided to write. Okay, it's just something I came up with on my own; I'm just playing around with something I want to produce. So I'm going to make an image here,
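The same demo can be reproduced in code with the diffusers library; a minimal sketch, assuming a CUDA GPU and using one public Stable Diffusion checkpoint:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")

    prompt = ("A watercolor painting of the Dallas, Texas skyline, "
              "using Van Gogh's Starry Night as inspiration")
    image = pipe(prompt).images[0]   # a unique image every run
    image.save("dallas_skyline.png")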
368
01:03:43.530 --> 01:03:49.190
Ryan Dymek: and this will produce me something unique. I want to point out it is not
369
01:03:49.250 --> 01:03:58.750
Ryan Dymek: copying other work, but it's got probably thousands, tens of thousands, hundreds of thousands of pictures of the Dallas skyline that it knows about
370
01:03:58.820 --> 01:04:24.069
Ryan Dymek: right? Now, is it always going to be right and accurate? Like I said, it's subject to errors. I mean, could it incorporate some building in this picture that's not even in Dallas? Could it plug in something from New York or something? Absolutely, that kind of stuff is possible. So you do have to inspect it, right? But I like this one. Again, I didn't know what it was going to produce. Okay, it's unique every time.
371
01:04:24.470 --> 01:04:34.830
Ryan Dymek: And so here's one I like. This kind of looks like the Space Needle in Seattle to me, but we have this ball, this building,
372
01:04:35.090 --> 01:04:44.500
Ryan Dymek: I can't get the name of it off the top of my head. But in any event, yeah, this is actually producing completely unique works of art. And somebody might see this. And look at this: this one looks like it's framed, and it's got this, like,
373
01:04:44.570 --> 01:04:47.800
Ryan Dymek: you know, it looks like it's sitting on a desk or something. This one looks like it's
374
01:04:47.810 --> 01:04:54.099
Ryan Dymek: sitting somewhere on a wall. I don't know. That's all AI-driven too. Okay, it's not like it
375
01:04:54.130 --> 01:04:59.710
Ryan Dymek: just found an image on the Internet. Ah, Reunion Tower. Thank you, Andres.
376
01:04:59.940 --> 01:05:01.879
Ryan Dymek: It was escaping my mind.
377
01:05:02.141 --> 01:05:17.460
Ryan Dymek: But an image like this, it's not like it just found something on the Internet and is just showing it to us. Okay, this is a unique piece of work. I just need to point that out. And so this is just a way to kind of sandbox this, or play around with it.
378
01:05:17.932 --> 01:05:42.169
Ryan Dymek: Are there products out there that productize this? Absolutely. I can do this in ChatGPT, I can do this in Google's Gemini; a lot of these products now can generate images. Well, guess what? Behind the scenes it's not ChatGPT or Gemini doing it. It's using models that are already incorporated into it. In fact, in a lot of cases, it's using Stable Diffusion to do it.
379
01:05:42.280 --> 01:05:53.694
Ryan Dymek: Okay, so what happens is you can take this product and make your own application that embeds this into it, and you can have this capability in your product if you want. Right? If you're a programmer,
380
01:05:53.980 --> 01:05:58.909
Ryan Dymek: you can actually embed this technology into your own applications. Okay?
381
01:05:59.250 --> 01:06:22.299
Ryan Dymek: And so that's exactly what those third-party companies are typically doing: they're using a model. Now, they're usually incorporating a lot of models, and maybe they've supplied a whole bunch of their own data. So what I can do is take a model like Stable Diffusion, or anything else, and actually enhance it for my own use. This is where things get interesting. Like I said, this product here
382
01:06:22.520 --> 01:06:26.830
Ryan Dymek: is basically text-to-image. Okay, that's what it does: text-to-image.
383
01:06:26.960 --> 01:06:32.020
Ryan Dymek: but it may not be fully trained on every image that's ever been created right?
384
01:06:32.374 --> 01:06:43.409
Ryan Dymek: So what if I want something a little bit more unique? Maybe I need to create images of things that are very niche to my market, my industry, my business.
385
01:06:43.490 --> 01:06:53.270
Ryan Dymek: I may want to supply it more data and more images to enhance it. And that's where I can take these and use this as what we call a foundational image,
386
01:06:53.300 --> 01:06:55.579
Ryan Dymek: or, excuse me, a foundational model,
387
01:06:55.870 --> 01:07:00.930
Ryan Dymek: and use it as a basis to then make my own.
388
01:07:01.070 --> 01:07:25.449
Ryan Dymek: Okay: who owns the rights to the image generated? I would argue it depends on the product. Okay, that's a licensing thing, depending on the model. So you do have to read, model by model, how the licensing works. But at the end of the day it's yours most of the time: you pay to use the model, but the production of the model
389
01:07:25.450 --> 01:07:41.560
Ryan Dymek: is many times yours. But I'm not going to promise you that. Okay, it's all going to come down to licensing, so you do read the licenses. Okay? So in that regard, if I come back to just Stable Diffusion in general, let me go back to Models; I'll show you the licensing here, just show you where to find that.
390
01:07:41.680 --> 01:07:43.738
Ryan Dymek: So if I go back
391
01:07:49.280 --> 01:07:55.180
Ryan Dymek: If I go back to this one by Stability AI, Stable Diffusion, and I come in here,
392
01:07:55.190 --> 01:08:04.470
Ryan Dymek: you'll notice in here there's all kinds of links and information right? And so inside of here there should be a link to its licensing.
393
01:08:05.410 --> 01:08:15.180
Ryan Dymek: There's files and versions here, and I believe the license is usually in here right here. So if I click on this, it'll tell me exactly the licensing for this model. Okay?
394
01:08:15.380 --> 01:08:35.909
Ryan Dymek: So go ahead and use this information to answer that question, right? And I know it's a lot to read through. But that may be a legality thing there as to the models you choose to use; you might need to know how you can use those, and things like that. Now, if you use a tool like ChatGPT or others, it may have its own
395
01:08:36.159 --> 01:08:50.379
Ryan Dymek: published licensing as well. So just kind of be aware of that. You have the model licensing, and then any software that might be leveraged might have its own as well. So licensing can get a little tricky here. Okay, but keep in mind, it is new creative work, right? It is new stuff.
396
01:08:50.380 --> 01:09:13.194
Ryan Dymek: But it can get interesting. There's, you know, the courts and the laws and all that. Obviously I'm not a legal representative here, so a little disclaimer. But they're still deciding, right? There's still a lot of stuff that hasn't been decided on this, and I'm sure things will come up over time. But this is kind of where we're at on this. So if you were to use it
397
01:09:13.800 --> 01:09:39.330
Ryan Dymek: and publish it as your own work, that's going to be tricky. Most of the time, though, doing it for things like marketing, or using it to make predictions on things, or what have you, you're usually pretty safe. It's when you actually publish it as your own creative work and you're making money off of it: you know, if I were to turn around and sell a piece of art that AI makes, there's probably going to be stuff I'm going to have to look into there. Right?
398
01:09:47.100 --> 01:10:06.949
Ryan Dymek: Yeah. So, Dinesh, that's a great point there as well. Sometimes there are things in there that say, hey, look, we'll indemnify you, because there is that hallucination aspect of things. There is the fact that it learned from something, so can it produce something that looks very similar to an original? It could, and that can get tricky,
399
01:10:07.090 --> 01:10:13.770
Ryan Dymek: can get very, very tricky, because if it is producing something that is, you know, 95% similar to something else,
400
01:10:14.243 --> 01:10:27.700
Ryan Dymek: you do have a situation there. So you want to make sure that you understand each product, each tool that you might want to use, and the licensing behind it, and so on. But that's a big question that's far beyond what I can answer today for you. Right?
401
01:10:28.560 --> 01:10:32.960
Ryan Dymek: So that's Hugging Face. Okay, so this basically is a huge repository
402
01:10:33.040 --> 01:10:41.230
Ryan Dymek: of models available for consumption, available for use. And then the next thing is: what if I want to take one of these models
403
01:10:41.300 --> 01:10:43.009
Ryan Dymek: and I want to train
404
01:10:43.365 --> 01:10:45.570
Ryan Dymek: I want to train on top of it.
405
01:10:45.660 --> 01:11:01.429
Ryan Dymek: Okay, so what I can do is come in here and go, okay, Stable Diffusion, look at this: I can do my own training. Now, training, it depends on the model. Sometimes you'll see this dropdown and it'll say this; in other cases it'll actually say SageMaker in AWS. So it kind of depends on the product.
406
01:11:01.440 --> 01:11:04.855
Ryan Dymek: Let me show you. Just see if I can find an example of that.
407
01:11:05.590 --> 01:11:07.730
Ryan Dymek: I made a note for myself.
408
01:11:08.220 --> 01:11:09.970
Ryan Dymek: Yeah, there's one that I know.
409
01:11:11.720 --> 01:11:12.560
Ryan Dymek: Okay.
410
01:11:13.650 --> 01:11:15.150
Ryan Dymek: so if I
411
01:11:15.790 --> 01:11:18.180
Ryan Dymek: look at this, I think it's this one Zephyr.
412
01:11:18.380 --> 01:11:31.620
Ryan Dymek: I go to Train. Yes, so this one right here, there's SageMaker. So what I can do is say I want to do a new training job. When you see the word train, it means I want to supply data of my own to improve it.
413
01:11:31.660 --> 01:11:44.235
Ryan Dymek: Okay, that's basically what that means. And so in that regard, I can do fresh training jobs based on this model. So I can use this as a starting point and enhance it for my own use.
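A hedged sketch of what "supplying data of my own" can look like with the transformers Trainer API; the tiny base model and the data file are stand-ins (the Zephyr model from the demo would slot into the same place, just with far more compute):

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "distilgpt2"                               # small stand-in for a Hub model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.pad_token or tok.eos_token    # ensure padding works
    model = AutoModelForCausalLM.from_pretrained(base)

    # Our own domain text: the data we supply on top of the base model
    data = load_dataset("text", data_files="my_domain_docs.txt")  # hypothetical file
    train_ds = data["train"].map(
        lambda b: tok(b["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # labels = inputs
        train_dataset=train_ds,
    ).train()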
414
01:11:45.000 --> 01:11:52.169
Ryan Dymek: But the most basic understanding around that: improving it for your own use might be industry-specific, might be business-specific.
415
01:11:52.250 --> 01:11:55.509
Ryan Dymek: Let's say I am a manufacturing company.
416
01:11:55.790 --> 01:11:57.720
Ryan Dymek: and we manufacture parts.
417
01:11:57.850 --> 01:12:07.559
Ryan Dymek: or some hardware product, and so there are specialty bolts and tools and brackets and all that kind of stuff.
418
01:12:07.620 --> 01:12:10.109
Ryan Dymek: We might actually use AI
419
01:12:10.120 --> 01:12:13.970
Ryan Dymek: and cameras to actually be able to identify those parts
420
01:12:14.130 --> 01:12:19.409
Ryan Dymek: within that ecosystem, so it can just see it and tell you what that part is,
421
01:12:19.480 --> 01:12:22.279
Ryan Dymek: and that might be done on some sort of automated
422
01:12:22.360 --> 01:12:32.589
Ryan Dymek: pipeline, some sort of automated production line, right? But that's even more traditional AI; that's actually not generative AI. That's the kind of thing we've been doing now for quite some time.
423
01:12:33.100 --> 01:12:36.800
Ryan Dymek: But that's the thing, too, here on Hugging Face:
424
01:12:36.820 --> 01:12:42.940
Ryan Dymek: the models that are here, these are not all generative AI. Okay, I'm sorry, I keep doing that. This model up here,
425
01:12:42.970 --> 01:12:46.820
Ryan Dymek: these 500-plus-thousand models,
426
01:12:47.344 --> 01:13:08.870
Ryan Dymek: are just models. Some of them are more gen-AI-specific, where they are producing new work; others are more about identifying, categorizing, predicting fraud, anomaly detection, simple stuff like voice-to-text and text-to-voice, which we've actually been doing for quite some time. I say simple, but
427
01:13:08.900 --> 01:13:16.610
Ryan Dymek: it's just simple because it's been done for a long time. It's not new and unique, necessarily.
428
01:13:16.990 --> 01:13:27.009
Ryan Dymek: These models, there's a massive library here. Now, if I go into AWS, AWS also has some of its own models, and I'll show you that here in a little bit as well.
429
01:13:27.770 --> 01:13:37.099
Ryan Dymek: So there's this product called SageMaker in AWS, and SageMaker is Amazon's core technology that allows us to
430
01:13:37.644 --> 01:13:46.099
Ryan Dymek: create models, train models, host models, run models, create applications that integrate with it, and so on. Basically everything.
431
01:13:46.210 --> 01:13:50.339
Ryan Dymek: Everything ML that we want to do, we can do pretty much in SageMaker.
432
01:13:50.750 --> 01:13:59.219
Ryan Dymek: That is, for custom work, and/or if I just want to run a model. Okay, SageMaker has the ability to simply run a model somebody else produced,
433
01:13:59.550 --> 01:14:08.950
Ryan Dymek: all the way to: I want to completely make my own custom models altogether, starting from scratch or using other models as a starting point,
434
01:14:08.980 --> 01:14:10.079
Ryan Dymek: all of the above.
435
01:14:10.120 --> 01:14:12.940
Ryan Dymek: So there are tools in SageMaker.
436
01:14:13.404 --> 01:14:28.409
Ryan Dymek: There's a tool in here called Data Wrangler, as an example. This allows me to manipulate and work with data that I'm going to use for training, with zero code if I want. I can use code to do things if I wish, but I can also go zero-code: I just kind of tell it
437
01:14:28.410 --> 01:14:45.130
Ryan Dymek: what I want to do with my data, and it will structure the data and get it to a point at which it's ready for training your own model. Okay? Training your own model basically means: here's all my historical data that I want to use, and I want to use that for future work.
438
01:14:45.330 --> 01:14:52.739
Ryan Dymek: And so Data Wrangler would be a great tool to help us with that. And that's just one feature of SageMaker. So,
439
01:14:53.030 --> 01:14:59.529
Ryan Dymek: obviously we can't teach SageMaker in the short time we have together. In fact, we're getting closer to the end here.
440
01:14:59.780 --> 01:15:07.100
Ryan Dymek: But SageMaker is not one product. Okay, SageMaker is a whole suite of products of its own. And this is in AWS.
441
01:15:07.200 --> 01:15:11.729
Ryan Dymek: And so, as an example, it's got a product called SageMaker Training,
442
01:15:11.770 --> 01:15:24.799
Ryan Dymek: and that product, all it does is train models; that's its sole purpose. Then there's another product called SageMaker Endpoints. What that will do is allow us to host a model
443
01:15:24.820 --> 01:15:28.530
Ryan Dymek: and allow us to programmatically work with that model that is now running.
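Once an endpoint is hosted, application code talks to it over a simple API call; a minimal sketch with boto3, where the endpoint name and payload shape are hypothetical:

    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="my-model-endpoint",      # hypothetical hosted model
        ContentType="application/json",
        Body=json.dumps({"inputs": "a watercolor painting of the Dallas skyline"}),
    )
    print(json.loads(response["Body"].read()))  # the model's output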
444
01:15:28.720 --> 01:15:45.809
Ryan Dymek: So SageMaker's got a lot of these different elements, and we don't have to pick one or the other. We can say, you know what, if I wanted to train something in SageMaker but run it, I don't know, in Google, I could do that. The outputs of these models are not proprietary to SageMaker. Okay?
445
01:15:46.063 --> 01:16:07.390
Ryan Dymek: So if I build a model anywhere, that model is universal. That's why Hugging Face works: you can create models in any system. I could go create a model in Google, create a model in Azure, create a model in AWS; it doesn't matter. The model that I produce, I could go put on Hugging Face and make it usable by anybody I want. I could make it private; I can make it public.
446
01:16:08.220 --> 01:16:11.149
Ryan Dymek: But I could use SageMaker just for the training piece, if I wanted,
447
01:16:11.200 --> 01:16:23.139
Ryan Dymek: or I could use SageMaker as a way to run a model that somebody else made at some other point in time. Right? So it's got a lot of specialized tools built into it. We're not going to get into all of them,
448
01:16:23.170 --> 01:16:27.430
Ryan Dymek: but it's got some geospatial work it can do, for example.
449
01:16:27.540 --> 01:16:41.269
Ryan Dymek: SageMaker is a plug-and-play kind of tool if you want it to be, all the way to people that have, you know, a master's in data science and want to write their own algorithms, want to do everything down to the lowest, deepest level you can get.
450
01:16:41.330 --> 01:16:43.880
Ryan Dymek: You can use SageMaker to do that, too. Okay,
451
01:16:43.900 --> 01:16:50.060
Ryan Dymek: Yes, SageMaker is designed for ML; that is its sole purpose. The whole product suite is all ML,
452
01:16:50.680 --> 01:17:06.849
Ryan Dymek: which includes generative AI, right, and deep learning, and traditional ML. For all of the above, SageMaker will work great. It'll manage the entire pipeline, by the way. So, something, a challenge, that AI has
453
01:17:07.320 --> 01:17:09.607
Ryan Dymek: in general. Okay,
454
01:17:11.140 --> 01:17:17.449
Ryan Dymek: is this concept that, in slang terms, people would actually call
455
01:17:17.710 --> 01:17:27.330
Ryan Dymek: "model rot." That's a term I've heard; I've never really liked that phrase. I don't know if anybody here has heard this or not, but the idea is also what we call
456
01:17:28.512 --> 01:17:31.400
Ryan Dymek: concept drift is another term for it.
457
01:17:31.440 --> 01:17:43.654
Ryan Dymek: So the idea with models, one of the problems with them, is that a lot of times they're built with current data and knowledge. Right? They're built with some sort of inputs. Well, those inputs change with time. Right? Think about it.
458
01:17:44.140 --> 01:17:49.590
Ryan Dymek: I would use an example of, let's say, retail companies
459
01:17:49.600 --> 01:17:56.970
Ryan Dymek: wanting to do a marketing campaign or purchasing behaviors. Let's kind of focus on purchasing behavior for a second.
460
01:17:57.030 --> 01:18:08.960
Ryan Dymek: And you have this retail company that was pre-COVID, and then COVID rolls around and everybody's getting locked down. No travel is happening. Everything changes, right? Everybody's doing things remote; they're not going to the store anymore.
461
01:18:09.330 --> 01:18:17.300
Ryan Dymek: And so what happened at that point is the model's effectiveness basically
462
01:18:17.540 --> 01:18:27.509
Ryan Dymek: degraded almost overnight. It was trained on information, current at the time, about purchasing behaviors or customer behaviors,
463
01:18:27.550 --> 01:18:47.979
Ryan Dymek: and then those behaviors changed. Well, that's the whole thing about models: the way models are built, they're built on the environment of today, right? And that environment around the model changes. And so, as the world changes, the model has to be updated. And so a lot of times with these models, you'll notice if you go to Hugging Face, you'll see a lot of this stuff:
464
01:18:48.060 --> 01:18:59.699
Ryan Dymek: you won't see many models that were last updated, you know, two years ago. Okay, that's pretty rare; there might be some in here out of this half a million or whatever. But most of these models are updated very regularly,
465
01:18:59.790 --> 01:19:07.590
Ryan Dymek: and the reason that they're updated very regularly is they have to be, right? You want to use current information to make them as accurate today as possible.
466
01:19:07.850 --> 01:19:16.819
Ryan Dymek: So they're constantly evolving; there are new versions. Well, the reason I bring all that up is SageMaker is built to actually monitor the quality of the model.
467
01:19:16.890 --> 01:19:41.990
Ryan Dymek: It can automatically develop an entire automated pipeline that will monitor the model, collect new data, retrain, make that model a fresh model, and automatically produce and host that new model. It'll do that automatically; that cycle will happen in an automated way. And so I can kind of set it and forget it if I want, and let that model refresh just happen automatically. SageMaker will do all that. Okay,
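The underlying drift check is simple to picture; a toy sketch of the concept (not SageMaker's actual monitoring API, and the threshold here is arbitrary):

    # Toy concept-drift check: retrain when live quality degrades past a tolerance
    def drifted(baseline_accuracy: float, live_accuracy: float,
                tolerance: float = 0.05) -> bool:
        return (baseline_accuracy - live_accuracy) > tolerance

    def monitoring_cycle(baseline: float, live: float) -> str:
        # In a managed pipeline, "retrain" would kick off a new training job
        return "retrain" if drifted(baseline, live) else "ok"

    print(monitoring_cycle(baseline=0.92, live=0.81))  # -> "retrain"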
468
01:19:42.150 --> 01:19:48.460
Ryan Dymek: so again, SageMaker: you can build models here. There's a product in SageMaker called JumpStart,
469
01:19:48.600 --> 01:20:06.980
Ryan Dymek: and I should say that's a feature of SageMaker called JumpStart. What JumpStart is, is basically a bunch of stuff that's already been done for you, ready to go. So it's a bunch of models, basically. But it's more than that, because JumpStart can actually include automated pipelines and things around it, not just the model itself.
470
01:20:07.030 --> 01:20:22.879
Ryan Dymek: But it does have a hub of foundation models that you can tie into. So you'll notice, in JumpStart in SageMaker, a lot of these reference back to Hugging Face. This is that tight integration between Hugging Face and AWS. So if I go into AWS and SageMaker,
471
01:20:23.550 --> 01:20:28.339
Ryan Dymek: I'll be able to find models and stuff in there. But a lot of those models
472
01:20:28.780 --> 01:20:32.159
Ryan Dymek: are sourced from Hugging Face. Okay?
473
01:20:33.500 --> 01:20:43.549
Ryan Dymek: So, in Hugging Face, I want to point out there's a question here about the difference between models and Spaces. A Space is more like a collection,
474
01:20:44.280 --> 01:20:46.550
Ryan Dymek: in some cases a collection of models.
475
01:20:46.580 --> 01:20:54.010
Ryan Dymek: while a model itself is the actual entity, the actual tool or library, that we're basically using.
476
01:20:54.080 --> 01:21:18.159
Ryan Dymek: Right? So that's all it is. Spaces is a place where I can actually play around; a lot of times in Spaces I can actually use the tool, like we saw with that model. So the model is actually the trained knowledge. Okay, the raw model is a bunch of algorithms and stuff. It's like the results
477
01:21:18.160 --> 01:21:29.119
Ryan Dymek: of the training process. It's the result of it going through and looking at all that past data and producing some knowledge. That's what the model is. The model itself is not runnable,
478
01:21:29.470 --> 01:21:44.229
Ryan Dymek: not directly. The model itself is the brains. And I have to have some sort of software to run that model. Right? I have to have some way of actually running it and hosting it and making it useful or usable. Okay, the model is actually just the logic.
479
01:21:44.380 --> 01:21:51.789
Ryan Dymek: Okay? So when I go to Spaces in Hugging Face, let me go back to that real quick. I go to Spaces,
480
01:21:52.840 --> 01:21:58.680
Ryan Dymek: and in Spaces we've got, let me just, you know, we're going to get short on time here,
481
01:21:59.240 --> 01:22:15.740
Ryan Dymek: I pull up that same Stable Diffusion Space here. Right, in this Space, what it's done is it's actually taken that model, hosted it, and it's using it, so I can kind of play with it. It's what we call, like, a sandbox, basically: a place where I can leverage that model.
482
01:22:15.750 --> 01:22:35.269
Ryan Dymek: So the model itself is actually just the brains, the actual learning that's been done. But then the Space in Hugging Face is actually a place where it can run, and I can explore it, I can play around with it; in this case, a bunch of different prompts that you might put in for the Stable Diffusion demo here,
483
01:22:35.550 --> 01:22:37.219
Ryan Dymek: as part of the Space.
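Many Spaces are just a small hosted app wrapped around a model; a minimal Space-style sketch with Gradio, which for brevity wraps a small text generator rather than the image model (the structure is the same, and the Space's actual internals aren't shown in the session):

    import gradio as gr
    from transformers import pipeline

    # A tiny Space-style demo: prompt in, generated text out
    gen = pipeline("text-generation", model="gpt2")

    def complete(prompt: str) -> str:
        return gen(prompt, max_new_tokens=30)[0]["generated_text"]

    gr.Interface(fn=complete, inputs="text", outputs="text").launch()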
484
01:22:37.310 --> 01:22:54.379
Ryan Dymek: Okay, so that's the main difference between the two: a Space is a way to run it and play around with it. Okay, so let me just briefly show you SageMaker, since time's getting tight here. So, model training and deploying at scale, just real quick. Again, I can train models in SageMaker, I can run them, I can host them,
485
01:22:54.560 --> 01:23:05.690
Ryan Dymek: so, for, like, production use, right? Spaces is not really for production use here; this is a way to kind of demo it, explore it, whatnot. But in production, I'm going to have to have it running somewhere.
486
01:23:08.560 --> 01:23:32.639
Ryan Dymek: Yeah. And then, of course, you can also use the transformers library and things like that; Randy actually mentioned that. So I'm not getting into the deep dive; obviously, in two hours we can only go so deep. But yeah, you can work with the models from Hugging Face. And the nice thing is, I don't even have to do most of that with SageMaker; I can go into JumpStart and use those models right now, right? In an actual production state. So I'll actually show you that here,
487
01:23:33.020 --> 01:23:39.870
Ryan Dymek: real quick. So, SageMaker JumpStart, right here: there's a whole bunch of models available. I'll just show you these firsthand.
488
01:23:39.920 --> 01:23:42.159
Ryan Dymek: So let me actually show you that.
489
01:23:42.220 --> 01:23:44.689
Ryan Dymek: So, SageMaker.
490
01:23:46.100 --> 01:23:59.279
Ryan Dymek: Make sure it's still logged in. Okay, I logged in ahead of time to my AWS web console. So this is in Amazon Web Services; this is the SageMaker product. You can go into whatever products you need to go into.
491
01:23:59.590 --> 01:24:13.800
Ryan Dymek: But if I come down here, you'll see there's this JumpStart and these foundation models here. And look at that: Stable Diffusion. This is one of the most popular models, by the way; it's why I keep using it, and it keeps coming up at the top of the list. It's just very, very popular for image generation.
492
01:24:14.010 --> 01:24:22.280
Ryan Dymek: Again, you want to create images for, I don't know, maybe you want to generate logos, right? You want to generate even pictures,
493
01:24:24.420 --> 01:24:26.000
Ryan Dymek: pictures for
494
01:24:27.090 --> 01:24:35.480
Ryan Dymek: like, what I'm getting at is headshots or something, right? So maybe I've got a profile I'm setting up, and I want pictures
495
01:24:35.490 --> 01:24:38.360
Ryan Dymek: of me. I mean, there are ways to actually
496
01:24:38.430 --> 01:24:56.040
Ryan Dymek: make those a professional-looking image, using your pictures, right? Using pictures of yourself to generate a new professional image. That used to take photographers days and weeks on site, and they'd have to do all kinds of work to manipulate the images and get it right.
497
01:24:56.070 --> 01:25:02.049
Ryan Dymek: And now you can use these tools to just generate that stuff. There are productized tools out there to do that.
498
01:25:02.340 --> 01:25:13.249
Ryan Dymek: And so, in any event, you see this model right here. If I view this model, I'm in AWS, I'm in SageMaker, and there's Stable Diffusion right here. And if I go to this model card right here, I clicked into it,
499
01:25:13.480 --> 01:25:28.020
Ryan Dymek: this model card, look where it takes me, right? You see how there's this really tight integration between Hugging Face and AWS. And again, AWS was specifically chosen by Hugging Face, who said, look, that's who we're using.
500
01:25:28.140 --> 01:25:46.809
Ryan Dymek: And so they came out and did all kinds of press releases a couple of years back around that, and there's a deep integration there. And so these are those foundation models we were talking about, right? So JumpStart in SageMaker is a way for me to, again, just jump-start: be able to get going very quickly and really with very little work.
501
01:25:46.890 --> 01:25:52.139
Ryan Dymek: I can come in here to JumpStart foundation models, I can take this model, and I can choose to deploy it.
502
01:25:52.530 --> 01:26:15.599
Ryan Dymek: I would actually have to choose to acknowledge the licensing behind it and the pricing model, if there is any pricing. The nice thing is, if you run it in AWS, AWS deals with the billing. You don't have to worry about paying some third parties or whatever; it just all comes through your AWS bill, and you can see exactly how you're spending your money, what the costs are, a little bit cleaner, and things like that. So
503
01:26:15.838 --> 01:26:32.990
Ryan Dymek: I would have to subscribe to this here and acknowledge all of that. This actually is, a lot of times, a paid-for product. Like I said, you don't pay one time for it in some big lump sum; you just pay, in some cases fractions of a penny, to use it each time, and how much you use it determines how much you spend.
504
01:26:33.060 --> 01:26:38.499
Ryan Dymek: Right? Whether it's for production use as opposed to development use, right? Those are all variables.
505
01:26:38.940 --> 01:26:47.409
Ryan Dymek: And so, you know, there are all kinds of options in here in SageMaker around using all these different models. I can choose whatever I want to do there.
506
01:26:47.500 --> 01:26:54.490
Ryan Dymek: And there's a question about: can I use SageMaker for vision? Yeah, absolutely. I forget where that question was.
507
01:26:56.100 --> 01:27:14.120
Ryan Dymek: Train vision ML? Yeah, right here, right? Computer vision models, natural language processing models, there's all that. We've got things like, again, governance; I know some people have mentioned things like risk or security, things like that have been brought up. There's a whole bunch of stuff you can do around that.
508
01:27:14.986 --> 01:27:30.139
Ryan Dymek: Here's the training tool right here. I can come in here and just do a training job. Now, I did one before class just to show you. So I did a training job, and I want to show you this: this training job took me four minutes. I had some data that I fed it, I performed a training job, and I produced a model
509
01:27:30.160 --> 01:27:35.840
Ryan Dymek: right here in SageMaker, and it took four minutes to produce it. Okay, so in this case,
510
01:27:35.890 --> 01:27:49.129
Ryan Dymek: if I scroll all the way down, I actually end up with a model package right here. Basically, I have this model. This is the file name, if you will, for it: model.tar.gz is the actual output here.
511
01:27:49.500 --> 01:28:11.979
Ryan Dymek: But I can take that and run it. Okay, in layman's terms, to define a model: it is the knowledge. Okay, again, what is a model? A model is the knowledge; it is the brains. The definition is that it is a trained algorithm. So imagine, mathematically speaking, and you don't have to be a math expert to understand this, you take an algorithm, without getting deep into the math.
512
01:28:12.030 --> 01:28:24.489
Ryan Dymek: If you have an algorithm that's got all these variables. Okay, you have to feed some values into those variables and those values essentially help it make its predictions or help it make it do what it does.
513
01:28:24.690 --> 01:28:30.040
Ryan Dymek: So you start with an algorithm, and then when you do the training job,
514
01:28:30.080 --> 01:28:55.849
Ryan Dymek: it learns the values for the variables that it needs to use, and the model is the result of that. So the model is actually saying: here's my algorithm, with all this trained information; it knows now what to put into that algorithm. So basically it's a mathematical computation at the end of the day. But in layman's terms, that model is the resulting knowledge
515
01:28:56.000 --> 01:29:01.710
Ryan Dymek: of what it's learned on how to do this stuff. Right? So as an example, it's learned how to produce images.
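To make "algorithm plus learned values" concrete, a toy example of my own: the algorithm is y = w*x + b, training finds w and b from historical data, and the learned pair (w, b) is, in miniature, the model:

    # Toy "model = algorithm + learned values": fit y = 2x + 1 by gradient descent
    data = [(x, 2 * x + 1) for x in range(10)]   # historical data
    w, b = 0.0, 0.0                              # the values the training job learns

    for _ in range(2000):
        for x, y in data:
            err = (w * x + b) - y                # prediction error
            w -= 0.01 * err * x                  # nudge the values to shrink the error
            b -= 0.01 * err

    print(round(w, 2), round(b, 2))              # ~2.0 and ~1.0: that pair is the "model"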
516
01:29:01.730 --> 01:29:06.840
Ryan Dymek: Okay, so we have this Stable Diffusion: it knows how to produce images. It's learned how to do that.
517
01:29:06.860 --> 01:29:18.920
Ryan Dymek: That's what the model is: the ability to do that. Now, you have to run it somewhere. You have to use it in some software product somewhere, some application; you have to host it. And then what I can do is take that text input,
518
01:29:19.070 --> 01:29:21.789
Ryan Dymek: I can say, create an image that looks like this.
519
01:29:21.930 --> 01:29:31.439
Ryan Dymek: and it'll produce an output, because it's learned how to do that. Okay, so it's the actual brains. We didn't get into the depths of it, but this concept of deep learning,
520
01:29:31.600 --> 01:29:34.530
Ryan Dymek: which this is built on, actually uses
521
01:29:34.560 --> 01:29:52.350
Ryan Dymek: brain terms, neurological terms. Okay, a lot of the terms that are used in deep learning actually rely on the way our human brain works: ways to incorporate inputs and outputs, right? So, without going too far on this:
522
01:29:52.380 --> 01:29:56.240
Ryan Dymek: in our own brains, when we see an image, when we, when we look at something.
523
01:29:57.070 --> 01:29:59.519
Ryan Dymek: we don't see that something the way you think you do.
524
01:29:59.570 --> 01:30:09.829
Ryan Dymek: What you do is, you see textures, you see colors, you see some shapes, you see shadows, and there are actually different parts of your brain interpreting each and every one of those elements.
525
01:30:09.840 --> 01:30:12.660
Ryan Dymek: And then your brain puts it all together. Well, guess what
526
01:30:12.840 --> 01:30:27.320
Ryan Dymek: the models behind all this stuff do exactly that. It's actually built off of the neurological understanding of the human brain. That's exactly how deep learning works, which includes generative AI. Okay? It learns the same way you and I do,
527
01:30:27.500 --> 01:30:30.379
Ryan Dymek: And it actually interprets data the same way you and I do.
528
01:30:30.770 --> 01:30:35.669
Ryan Dymek: It learns that, hey, when I see something that's textured this way, it's probably going to feel a certain way.
529
01:30:35.900 --> 01:30:36.775
Ryan Dymek: Right?
530
01:30:37.790 --> 01:30:45.509
Ryan Dymek: It learns the same thing. Okay, it has an input, and it kind of has an algorithm, if you will, that says, here's
531
01:30:45.520 --> 01:30:58.990
Ryan Dymek: what that's probably gonna feel like. And then when you touch it, you realize it feels the way it looks. So that's the thing: in our brains, we have hundreds of thousands of neurons active at any given time.
532
01:30:58.990 --> 01:31:17.749
Ryan Dymek: Well, same thing with deep learning. It's actually got all these little elements. If I'm doing an image, you have one element that's paying attention, perhaps, to corners and edges, to transitions of color, right? Another one that's just paying attention to color, another one that's just paying attention to texture,
533
01:31:17.960 --> 01:31:32.780
Ryan Dymek: another one that's just paying attention to size and relative size, right? Things like that. And so you have these inputs and these outputs. That's basically what's going on behind the scenes in deep learning: you have massive amounts of neurons that have been built
534
01:31:33.232 --> 01:31:55.060
Ryan Dymek: programmatically, and they learn. And when they learn, they now know how to interpret that, and so each little piece of the bigger picture knows how to do that one thing. But collectively, it's hundreds of these little things. Okay, that's deep learning in a nutshell. That's a week-long class if we wanted to actually teach it; it's not something you learn overnight, and there's a ton of math involved. But
535
01:31:55.090 --> 01:31:57.839
Ryan Dymek: if you want to just kind of talk high level
536
01:31:57.890 --> 01:32:02.379
Ryan Dymek: again, a model is the actual resulting knowledge of how to do something.
537
01:32:02.870 --> 01:32:06.430
Ryan Dymek: That's the most basic way to explain it. Okay.
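As a rough illustration of that deep learning picture, here is a toy forward pass in plain Python: many small units ("neurons"), each combining the same inputs with its own weights and feeding the next layer. The weights are random stand-ins for what training would actually learn, and the feature names in the comments are only analogies.

```python
import math
import random

def neuron(inputs, weights, bias):
    # one "little element": a weighted sum of its inputs,
    # squashed into the range 0..1 by a sigmoid activation
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, n_neurons):
    # a layer is just many neurons looking at the same inputs;
    # random weights stand in for what training would learn
    return [
        neuron(inputs,
               [random.uniform(-1, 1) for _ in inputs],
               random.uniform(-1, 1))
        for _ in range(n_neurons)
    ]

pixels = [0.2, 0.9, 0.1, 0.7]       # stand-in for raw image data
low_level = layer(pixels, 8)        # e.g., edges, color transitions, texture
high_level = layer(low_level, 4)    # combinations of those features
print(high_level)
```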
538
01:32:07.750 --> 01:32:13.729
Ryan Dymek: A model's like a recipe, it tells you how to bake a cake step by step. There you go, sort of.
539
01:32:14.120 --> 01:32:19.169
Ryan Dymek: The cake itself. So yeah, that's an interesting one, right? Produce me a cake.
540
01:32:19.280 --> 01:32:22.010
Ryan Dymek: Okay, it's not just a recipe. It knows how to make
541
01:32:22.030 --> 01:32:26.239
Ryan Dymek: one cake over another cake over another cake. It understands how to bake.
542
01:32:26.340 --> 01:32:30.759
Ryan Dymek: Okay, that might be a better way to put it: a model knows how to bake.
543
01:32:30.910 --> 01:32:51.209
Ryan Dymek: But collectively, it's a collection of hundreds of different elements that it's learned, right? And so it might be everything from temperatures, to understanding chemical reactions, to understanding how things rise, to understanding all of that, right? So it's not actually the cake; the cake wouldn't necessarily be the model.
544
01:32:51.210 --> 01:33:09.799
Ryan Dymek: The model produces the cake. But it's not the recipe either, because a recipe isn't dynamic, right? A recipe is fixed. So imagine if you were a really good baker and you had the ability of creating recipes that resulted in really good products, right?
545
01:33:09.820 --> 01:33:15.950
Ryan Dymek: The model is the baker. The model is the knowledge that's required to do that.
546
01:33:16.300 --> 01:33:34.349
Ryan Dymek: And it's like, if you could transfer that knowledge. I'm sure we've got some moviegoers in the house. If anybody here remembers The Matrix series, right, remember in that movie, that series, they could download how to do something,
547
01:33:34.590 --> 01:33:49.360
Ryan Dymek: right, download that knowledge into your brain. That is a model. A model is how to do that action, how to do that thing: how to convert voice to text, or text to voice, or text to images. It's some special knowledge of how to do that thing.
548
01:33:49.730 --> 01:33:51.720
Ryan Dymek: And that's it.
549
01:33:52.590 --> 01:33:53.370
Ryan Dymek: Yeah.
550
01:33:54.050 --> 01:34:16.190
Ryan Dymek: I know. Yeah, I agree. And again, layman's terms is a hard one, because it's a very complex topic. There's a saying, I want to say it was Einstein that said it, but I could be misattributing it: anybody can make something complex; it takes a real expert to make it simple. And that's true. This is one of those topics: how do you make this simple? It's really not simple. But
551
01:34:16.220 --> 01:34:18.649
Ryan Dymek: we can use it in a simple way today.
552
01:34:18.870 --> 01:34:29.580
Ryan Dymek: And so the problem is knowing how to understand it in a simple way. We don't need to do all the underpinnings. What's kind of cool is, the public cloud has built this in such a way
553
01:34:29.820 --> 01:34:35.970
Ryan Dymek: where it does all the hard work now. It does all the heavy lifting, it does all the math for me, it does all that.
554
01:34:35.990 --> 01:34:51.459
Ryan Dymek: And yeah, to fly the helicopter, that's what that scene was. That's right. So yeah, in this case, the heavy lifting is all of the data science that goes into it. If I don't want to do that, and I just want to use the results of it,
555
01:34:51.480 --> 01:34:55.439
Ryan Dymek: I can do that, right? That's the beauty of the public cloud: I can actually leverage
556
01:34:55.480 --> 01:34:57.259
Ryan Dymek: all this
557
01:34:57.620 --> 01:35:00.780
Ryan Dymek: very complex technology in a very simple way.
558
01:35:00.840 --> 01:35:05.480
Ryan Dymek: I can just simply say, Hey, I want to download that knowledge of how to do that thing.
559
01:35:05.580 --> 01:35:10.580
Ryan Dymek: I don't want to actually know how to do it. I just want to use the knowledge. I just want to use it.
560
01:35:10.977 --> 01:35:15.569
Ryan Dymek: And that's basically a model. Right? I want to go use that knowledge to do X
561
01:35:15.970 --> 01:35:19.609
Ryan Dymek: programmatically. I want to be able to use that knowledge programmatically.
562
01:35:24.530 --> 01:35:35.690
Ryan Dymek: Yeah, exactly. So, Stephen, I'm sorry. Yeah, basically, the model is the knowledge. So in this case, receiving the training, being able to then
563
01:35:35.770 --> 01:35:39.249
Ryan Dymek: produce some work of art from that training. Right?
564
01:35:39.310 --> 01:35:55.529
Ryan Dymek: So again, it's like the baker learning how to bake, and then maybe being able to produce its own recipes and then actually bake the cake, right? The model may not do the actual baking of the cake; that may be my software engine. But the knowledge on how to do it and how to make it would actually be your model.
565
01:35:55.970 --> 01:35:57.490
Ryan Dymek: Okay, now.
566
01:35:58.020 --> 01:36:02.869
Ryan Dymek: It's a complex thing, but it doesn't have to be; we can actually use the model
567
01:36:02.890 --> 01:36:05.169
Ryan Dymek: without knowing how to do everything the model does.
568
01:36:05.730 --> 01:36:06.520
Ryan Dymek: Okay.
569
01:36:06.900 --> 01:36:07.980
Ryan Dymek: I know it's a lot.
570
01:36:08.355 --> 01:36:12.940
Ryan Dymek: This is where it's really hard to have a complex topic thrown into two hours.
571
01:36:13.222 --> 01:36:31.889
Ryan Dymek: But I'm just giving you some insight into some of this. Everyone in this class is gonna pick it up at different levels. Some people have ML training experience already and are already familiar with that, about 25% of you or so, or maybe 20%, I think it was. Others maybe have ChatGPT or other experience in that realm, and others are just totally new to this whole topic.
572
01:36:32.213 --> 01:36:47.786
Ryan Dymek: So it makes it tough also with a group this size and such a short time window. But you've all been wonderful. So actually, I do wanna show you one final product, and then we'll kinda open it up for some final questions. All of the questions have kinda been getting answered along the way, I think, hopefully.
573
01:36:48.140 --> 01:36:56.580
Ryan Dymek: There's another product called Bedrock, actually, in AWS. Let me go pull that up. Bedrock.
574
01:36:56.850 --> 01:36:58.550
Ryan Dymek: Okay, Amazon Bedrock.
575
01:36:58.580 --> 01:37:04.180
Ryan Dymek: This one's a confusing one for people, because it is basically Hugging Face,
576
01:37:04.540 --> 01:37:07.899
Ryan Dymek: but as an internal AWS application.
577
01:37:08.400 --> 01:37:12.150
Ryan Dymek: So it pretty much does everything Hugging Face does, for the most part.
578
01:37:13.081 --> 01:37:27.129
Ryan Dymek: So a lot of people get confused as to why I would want Bedrock then, because, well, Hugging Face actually has more models. Hugging Face is like the de facto standard model repository, or model,
579
01:37:27.440 --> 01:37:32.909
Ryan Dymek: Yeah, model repository. So then, bedrock is actually a model repository in aws.
580
01:37:33.320 --> 01:37:40.869
Ryan Dymek: Well, ironically enough, this actually ties into Hugging Face as well. A lot of times, the main reason we want something in the cloud provider
581
01:37:41.120 --> 01:37:46.810
Ryan Dymek: is the need to not go out of the cloud provider. So, as an example,
582
01:37:47.380 --> 01:37:54.069
Ryan Dymek: yes, by the way, Gilmour, everything in AI comes down to a model, 100%. Yes,
583
01:37:54.414 --> 01:38:08.699
Ryan Dymek: there is some model that has some particular knowledge that you are leveraging, right? And now it may not come down to just one model; it may be a collection of models. When you do something in ChatGPT, it is a collection of models that ChatGPT knows how to use and leverage. Okay,
584
01:38:08.840 --> 01:38:11.309
Ryan Dymek: so in any event, yeah. So Bedrock
585
01:38:11.320 --> 01:38:14.289
Ryan Dymek: is a model repository of its own.
586
01:38:15.407 --> 01:38:27.419
Ryan Dymek: The reason you want this, let me give you an example. Let's say we have a sensitive data situation. Okay, we've got personal information. We maybe even process credit cards.
587
01:38:27.490 --> 01:38:31.569
Ryan Dymek: We've got some sort of sensitive private information. Okay?
588
01:38:32.500 --> 01:38:35.469
Ryan Dymek: And then we want to do some sort of
589
01:38:35.500 --> 01:38:38.579
Ryan Dymek: some sort of AI against that data.
590
01:38:39.490 --> 01:38:44.860
Ryan Dymek: We may also have an environment on top of all of that that is not allowed to have Internet access.
591
01:38:45.460 --> 01:38:52.339
Ryan Dymek: Okay, I'm just kind of framing up the scenario. So we have this very tight, secured environment. It cannot get to the Internet
592
01:38:52.500 --> 01:38:54.279
Ryan Dymek: for security reasons.
593
01:38:54.860 --> 01:38:57.759
Ryan Dymek: It's got sensitive information that I don't want to just
594
01:38:57.860 --> 01:39:00.330
Ryan Dymek: go out wild on the web. I got to protect it.
595
01:39:01.168 --> 01:39:05.649
Ryan Dymek: And so I might need an environment to govern that a bit more.
596
01:39:05.770 --> 01:39:09.349
Ryan Dymek: Well, if I can't get to the Internet, I can't get to Hugging Face.
597
01:39:09.480 --> 01:39:16.720
Ryan Dymek: Okay, and this is the same problem, by the way, that emerges with things like ChatGPT. I'm sorry, excuse me, I meant to say GitHub.
598
01:39:16.850 --> 01:39:20.949
Ryan Dymek: So like GitHub, same thing: it's not hosted locally in my environment.
599
01:39:21.030 --> 01:39:28.079
Ryan Dymek: So in this case, if I've got a bunch of applications and software built in AWS, built in the cloud,
600
01:39:28.750 --> 01:39:31.020
Ryan Dymek: and it needs to use these models,
601
01:39:31.040 --> 01:39:33.970
Ryan Dymek: and I need it to be completely secured and locked down,
602
01:39:34.110 --> 01:39:36.920
Ryan Dymek: I can't get out to Hugging Face, no matter how much I want to.
603
01:39:37.010 --> 01:39:48.839
Ryan Dymek: I may not be able to do that. And so that's a big reason to use Bedrock: because Bedrock is private to AWS, I can actually embed this in environments that normally couldn't get to Hugging Face.
604
01:39:48.890 --> 01:39:52.260
Ryan Dymek: So to some degree, this competes with Hugging Face,
605
01:39:52.360 --> 01:40:06.519
Ryan Dymek: and also integrates with Hugging Face in various ways. But really, it's hosted by Amazon; it runs in Amazon. You're not having to leave Amazon to get some of these models. Okay, so this is an alternative to Hugging Face.
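For anyone curious what "using a model without leaving AWS" looks like in practice, here is a hedged sketch using boto3's Bedrock runtime client. It assumes credentials are configured and the chosen model has been enabled in the Bedrock console; the model ID and request body below are illustrative, since both vary by model provider.

```python
import json

import boto3

# Bedrock inference goes through the "bedrock-runtime" client
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize what a machine learning model is in one sentence."
})
response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",   # example ID; check your console
    body=body,
)
result = json.loads(response["body"].read())  # response shape varies by provider
print(result)
```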
606
01:40:06.700 --> 01:40:15.810
Ryan Dymek: I would say it's still fairly new. This product has not been around all that long. Hugging Face has been around much longer. Hugging Face has a much larger
607
01:40:15.870 --> 01:40:17.920
Ryan Dymek: footprint, much larger
608
01:40:18.381 --> 01:40:45.539
Ryan Dymek: adoption, if you will. So when this came out, at first it kinda confused me, like, why do this? What's the compelling reason to use Bedrock? Well, there are big, compelling reasons, and it's usually around what I just described. That's one big reason. Okay, but just be aware, Amazon offers its own repository. In other words, I can do everything in Amazon, and I don't have to leave Amazon to do it. Okay, so if I'm building up complex environments inside of AWS,
609
01:40:45.770 --> 01:40:50.730
Ryan Dymek: I don't have to leave AWS to consume these models and deploy them and
610
01:40:50.740 --> 01:40:52.560
Ryan Dymek: use them within my environment.
611
01:40:53.260 --> 01:40:57.719
Ryan Dymek: Okay, so I just wanted to make you aware of Bedrock. We don't need to go through it in depth,
612
01:40:57.830 --> 01:41:09.370
Ryan Dymek: but just kind of be aware of it. So the main suite of tools here that makes up everything we've discussed today really comes down to AWS's SageMaker and Bedrock.
613
01:41:09.760 --> 01:41:27.080
Ryan Dymek: And these are just some features here, just kind of pulling them up on the screen. They'll be in the recording as well, and of course you can read bullet by bullet. I'm not gonna read bullets to you; you can do that for yourself. But the main thing here, again: Bedrock has a lot of overlap with Hugging Face. Okay,
614
01:41:27.508 --> 01:41:44.010
Ryan Dymek: so these are just some features. I'm not gonna go line by line through them; I just want you to be aware of them. They're also in the recording. If you go grab the recording after this, you can always pause it on these screens and see them for yourself. We don't typically share out the slide deck, but you'll see them in the recording, so I wanna make sure they're in the recording here.
615
01:41:44.390 --> 01:41:57.580
Ryan Dymek: So we don't need to go crazy into Bedrock. Just a bunch of little slides with some bullets on them, as far as some features and things like that. But, generally speaking, we kind of went over the high level of Bedrock, and that is that it's very much like,
616
01:41:58.083 --> 01:41:59.850
Ryan Dymek: very much like hugging pace.
617
01:42:00.260 --> 01:42:12.840
Ryan Dymek: Okay, so I'm gonna leave these. This is also, again, where Bedrock adds the extra compliance. There are all kinds of bullets here around compliance standards and the ability to achieve those, where I may not be able to achieve those
618
01:42:12.870 --> 01:42:15.420
Ryan Dymek: with with hugging face directly.
619
01:42:15.830 --> 01:42:18.579
Ryan Dymek: for for reasons described already. Okay?
620
01:42:18.720 --> 01:42:42.930
Ryan Dymek: And then, of course, Bedrock may also add some additional legal protection, too. So this is that indemnity that was actually asked about earlier, right? Copyright claims arising from this stuff. So this is where there's uncapped indemnity. That's kind of interesting, right? So that's another reason to maybe use Bedrock as opposed to Hugging Face. Hugging Face doesn't necessarily directly offer you this; this might be license by license.
621
01:42:43.366 --> 01:42:48.649
Ryan Dymek: So this is another reason, maybe to consider using using bedrock as well.
622
01:42:49.680 --> 01:43:00.590
Ryan Dymek: Okay, so that's it. We've got a few minutes for Q&A. If anybody wants to throw some questions in either the Q&A box or in chat, either one, I'm paying attention to both.
623
01:43:02.240 --> 01:43:05.769
Ryan Dymek: let's see here, just looking, following back on some of the chat comments here.
624
01:43:06.224 --> 01:43:11.050
Ryan Dymek: If I don't have the need to train foundation models nor develop new models.
625
01:43:11.090 --> 01:43:24.969
Ryan Dymek: Yeah, you don't. Well, so you so you will need a place to run the models right to actually produce them. So I showed you the sandbox and hugging phase. Right? It was actually called a spaces, right. So I was able to kind of put a prompt in there and generate an image.
626
01:43:25.320 --> 01:43:27.430
Ryan Dymek: But that's for like playing around with it.
627
01:43:27.860 --> 01:43:35.530
Ryan Dymek: If you're going to use it in production. If you're going to actually incorporate that model into a piece of software, you're going to need a place to run it.
628
01:43:35.640 --> 01:43:40.296
Ryan Dymek: And that place to run. It can be a lot of places you you actually could use
629
01:43:40.590 --> 01:43:46.239
Ryan Dymek: like paid hosting providers to run those models. You could run it in sagemaker.
630
01:43:46.410 --> 01:43:54.980
Ryan Dymek: You could also go build a virtual machine and completely do it yourself. That's a lot of operational overhead for most people. I would discourage that. But you could.
631
01:43:55.030 --> 01:44:13.570
Ryan Dymek: So the point is, you need a place to run that model and host it so it's kinda like, if you have a website. You need a place to host that website rather as hosting providers out there. You can have a website built on, you know, Wordpress, or whatever you don't just have a website. You have to run it somewhere. Same kind of thing here with a model. I have to actually
632
01:44:13.590 --> 01:44:21.080
Ryan Dymek: run it somewhere. It has to exist somewhere and have capacity applied to it, and have the ability for it to do what it does.
633
01:44:21.170 --> 01:44:41.749
Ryan Dymek: And so so you might choose stage maker to do that. But you don't have to right you. Could. You could again run your own virtual machines. You could use a third party hosting provider, I believe hugging face can actually be that hosting provider for you. But you may have to pay because there's paid options within hugging face. So it kinda just depends on who you choose to actually run the model. Okay.
634
01:44:47.110 --> 01:44:58.160
Ryan Dymek: okay, cool. And I'm just looking feedback here. Yeah. So you started kind of looking at bedrock 2 weeks back. That's where you spend most of your time. Yeah, it's cool, because I mean again, bedrock does just about everything hugging face does.
635
01:44:58.190 --> 01:45:09.600
Ryan Dymek: but because it's directly integrated in aws. More so. Not that the you know hugging face is highly integrated as well, but hugging face is not Amazon. It's not by Amazon. It's just a partnership
636
01:45:09.820 --> 01:45:27.629
Ryan Dymek: bedrock is Amazon. And so you're also gonna be. You're gonna get better support in the sense that you have a one stop shop with aws where you can say, look trying to do this thing. You've got bedrock over here, sage maker, over here, and you can use those things together, and you're gonna get full support on the whole landscape, right? Not just part of it.
637
01:45:27.640 --> 01:45:35.340
Ryan Dymek: And then also again, the the legal aspect of it. Not having to go out to hugging face is very helpful. Right?
638
01:45:39.820 --> 01:45:43.879
Ryan Dymek: Oh, yeah, so neural models. Yeah. So I am aware of the book.
639
01:45:45.290 --> 01:45:54.170
Ryan Dymek: as far as anyone currently using. I mean, I couldn't point to specific references without doing some some homework on it. But yeah, they're absolutely being used
640
01:45:55.480 --> 01:45:56.749
Ryan Dymek: widely. In fact.
641
01:45:56.790 --> 01:45:59.390
Ryan Dymek: Yeah. So I was kind of give you just a direct answer. There on that one.
642
01:45:59.450 --> 01:46:00.987
Ryan Dymek: Okay, we're about.
643
01:46:01.520 --> 01:46:05.524
Ryan Dymek: yeah, I think we're about out of time looking at further questions. Here, if you have any
644
01:46:07.540 --> 01:46:34.689
Ryan Dymek: stage maker to host, yeah, you can use sagemaker to host stage maker to train sage maker to actually monitor the quality of the output of the model. So like is what it's producing. Is it actually what you want? Is it good, you know, just as a most basic example. What if I were to predict fraud with some regular Ml model? Now, that's not generative. AI, specifically, that's just AI, that's just. Ml, but let's say I was predicting fraud. And we're a bank.
645
01:46:34.690 --> 01:46:51.780
Ryan Dymek: But 90% of what we predict is fraud is not fraud, and then turns out we're actually missing a lot of the fraudulent transactions. That's not very good quality. Right? So stage maker actually gives us the ability to monitor the reality of things where we can say, look, we predicted this is such. But we went back and inspected and realized it wasn't
646
01:46:51.780 --> 01:47:08.880
Ryan Dymek: stage maker can help us monitor that quality and actually measure its success. So it's got all kinds of features. There's just a few. It's it's got the jump starts to give us the ability to kinda just get going quickly and use models just right out of the box and go gives me the ability to host. Those models gives me the ability to train new models
647
01:47:08.970 --> 01:47:15.880
Ryan Dymek: completely from scratch or from foundation models and enhance them, gives me the ability to monitor the quality of my models
648
01:47:16.261 --> 01:47:33.809
Ryan Dymek: gives me the ability to have Cicd pipelines built into all this. So I didn't even get into that. That's a whole automation for those that understand what I'm saying. There, great, if not, that's a discussion for another day. But I have the ability of automating the whole pipeline and deployment of models and all that I can. I can automate it all with stage maker
649
01:47:35.200 --> 01:47:36.610
Ryan Dymek: great.
650
01:47:36.810 --> 01:47:41.819
Ryan Dymek: So as far as creating models. There's no prerequisites other than you need some data.
651
01:47:41.960 --> 01:47:48.899
Ryan Dymek: Now, there are data exchanges, actually something I didn't get to. But I want you just to be aware of it here, real quick.
652
01:47:48.910 --> 01:48:01.950
Ryan Dymek: There is actually what we call a data marketplace in Aws, it's called the Data Exchange. If you just do a Google search for this, you can find it. Let me actually just give you a quick link to it in the chat. So you've got it
653
01:48:02.970 --> 01:48:05.460
Ryan Dymek: alright. Put this in the chat
654
01:48:05.690 --> 01:48:10.679
Ryan Dymek: data exchange so you can come in here and actually check out data sets
655
01:48:11.058 --> 01:48:21.759
Ryan Dymek: and and actually browse. You know, thousands of data sets. So you want data on Covid medical data. It's available to you. And you could do training on that.
656
01:48:21.870 --> 01:48:51.129
Ryan Dymek: You want days right here, Covid, right here, right? You want geographical data, geological data you want data on. I don't know college graduates, you know, like percentages of things. You know, school by school. I don't know. I'm just, you know, you want various state data, country data, all sorts of things that might be made public. There's a whole data set here, a whole data engine you can use. So when you're if you're going to train your own models, only prerequisites is to be able to have something to train off of.
657
01:48:51.430 --> 01:49:00.760
Ryan Dymek: and so training implies you have some sort of information to give it right? Just like the learning process as a human being. Right? So in any event.
658
01:49:01.130 --> 01:49:03.046
Ryan Dymek: yeah, the link for that.
659
01:49:04.100 --> 01:49:25.719
Ryan Dymek: Oh, I'm sorry. Maybe that was related to the Gen. AI courses and stuff, too. So I do wanna wrap it up. I wanna hand it back actually to to to Ann, basically my producer. But yeah, we've got Jen AI classes. We can. We can teach. I I teach many of those in their multi day classes. We can go into much more depth and get some better understanding in this short window of time. We just don't have a lot of time to go into depth, so
660
01:49:26.246 --> 01:49:28.609
Ryan Dymek: I hope you all got at least something from this
661
01:49:28.650 --> 01:49:36.547
Ryan Dymek: hope. You enjoyed it, and if you want to come back and check out some of our classes, we glad to be glad to have you otherwise. And do you want to say anything.
662
01:49:36.780 --> 01:49:47.920
Axcel ILT: Ryan, yes. Oh my gosh, thank you so much. That was amazing. I learned so much here, and I'm also kind of glad I was muted, because I laughed out loud at Blockbuster and
663
01:49:48.930 --> 01:49:59.897
Axcel ILT: smiled big at The Matrix. So thanks for throwing those in. And I'm seeing people are asking about the recording. Yes, we will be sending the recording. And if I can just share my screen for a moment.
664
01:50:00.460 --> 01:50:02.100
Ryan Dymek: Absolutely. I'm going to stop mine.
665
01:50:02.100 --> 01:50:03.070
Axcel ILT: Okay.
666
01:50:03.070 --> 01:50:03.589
Ryan Dymek: Go for it.
667
01:50:03.590 --> 01:50:05.090
Axcel ILT: Cool. Alright
668
01:50:07.770 --> 01:50:11.479
Axcel ILT: I don't want my video on. I just wanted to
669
01:50:12.050 --> 01:50:39.270
Axcel ILT: reiterate what I said at the beginning: for a deep dive of generative AI engineering, this is a 5-day course that goes really in depth, and it can also be customized if you have a team that needs some tailoring. But if you just have one or two people to train, we have an open enrollment class that is actually running April 8 through 12. So I will put this URL again in the chat, and it'll also be in your
670
01:50:39.586 --> 01:50:56.369
Axcel ILT: post-webinar email. But also I wanted to say, the only thing better than two hours of generative AI on AWS is two days of this topic. And Ryan did a great job of giving a snapshot of this course, but of course, in a full
671
01:50:56.450 --> 01:51:07.770
Axcel ILT: two-day course, it'll be hands-on, more discussion, much smaller groups. We had a great turnout for this today, which was amazing. But you know, in a smaller class,
672
01:51:08.590 --> 01:51:21.049
Axcel ILT: you'd probably have time to do some more chatting, and actually get your hands on and do exercises. And the course is running April 18 through 19, if you're just one or two people and you want to take the open enrollment.
673
01:51:21.491 --> 01:51:25.990
Axcel ILT: So I will put both of those in; I will just make sure that those are
674
01:51:26.100 --> 01:51:36.179
Axcel ILT: in the chat for you, and you'll also get this as well when you get the recording, which should be available. We'll probably send that out tomorrow.
675
01:51:38.650 --> 01:51:39.630
Axcel ILT: Great.
676
01:51:40.110 --> 01:51:43.530
Axcel ILT: All right. Anything else? Anything else in the chat? Anything else
677
01:51:44.970 --> 01:51:51.670
Axcel ILT: we need to address, or we're about at the at the 2 h, mark now, so I thank everybody for hanging in there with us.
678
01:51:53.020 --> 01:52:09.889
Ryan Dymek: Yeah, there was one comment just in the chat. Yeah, sometimes people go to, like, create an image, or create some content, and it'll be totally messed up, totally wrong. Definitely try it with some new prompts, or play around with that, because a lot of times it does have to do with how precise you are in your prompt.
679
01:52:09.890 --> 01:52:25.430
Ryan Dymek: So if you go play around with, you know Chat Gpt, or any of these image production tools or whatever it is. Sometimes be as precise as you can and and give it. You know, some really quality inputs. And then the better the inputs the better. And this is actually where there is a skill
680
01:52:25.430 --> 01:52:52.049
Ryan Dymek: in how good you communicate with AI. There's a skill involved in just that. And that's why more and more businesses are actually starting to ask for. Like chat, gpt experience. You might be a you know you. You might be kind of a non technical job, but still asked to be able to use AI and be able to, you know, interact with it. And there is a skill involved in that. That takes time to to learn how to use that. So just a just a little tip on that.
681
01:52:52.450 --> 01:53:05.429
Ryan Dymek: Alright. Otherwise. Yeah, I think I think we're done, everybody. It's been a fantastic time with you. I'm really pleased to have the opportunity to share some knowledge, and and I'm glad you joined us, and maybe we'll see you again sometime.
682
01:53:05.730 --> 01:53:15.310
Axcel ILT: Yup, me too. Thanks, everyone, for coming out. We know you have a lot to do, and this is two hours. We really appreciate you being with us. And, Ryan, thank you for doing such a wonderful job. Really appreciate you as well.
683
01:53:16.060 --> 01:53:16.469
Ryan Dymek: I mean after.
684
01:53:16.470 --> 01:53:30.469
Axcel ILT: Everyone have a wonderful day. I hope we see you again at a webinar or in a class. And yes, the recording will be emailed out to everyone. So hope everyone has a great rest of your day, and hope to see you soon. Bye, everybody.
685
01:53:30.870 --> 01:53:32.410
Axcel ILT: Bye, bye, thanks, Ryan. Bye.
686
01:53:32.410 --> 01:53:33.820
Ryan Dymek: Goodbye. Thank you. Bye, bye.