00:02
So hello, and welcome to today's webinar, titled "Introduction to Oracle Database 19c: New Features and Benefits."
00:12
This webinar is being presented by Barbara, Senior Principal Instructor of Core Technologies.
00:20
Oracle Database 19c is the final long-term support release of the Oracle Database 12c family of products, which includes Oracle Database 18c.
00:32
This latest release of the world's most popular database also introduces new functionality, providing customers with a multi-model, enterprise-class database for all their typical use cases.
00:47
Throughout this webinar, Barbara will introduce attendees to some of the new features available in Oracle 19c.
00:55
During this live session, attendees will benefit from a short overview of Oracle's Training On Demand curriculum and will be given the opportunity to download additional documentation relevant to the topics discussed.
01:10
As mentioned, everyone's phones will be muted during the webinar, so if you have any questions, please enter them in the Q&A box at the bottom of your screen.
01:22
We will host a Q&A session at the end of the webinar.
01:26
And if you enjoy the presentation today and are interested in learning more about training anywhere with our interactive virtual platform, iMVP, please visit our website or contact us.
01:39
Today's webinar is being recorded and will be sent to each and every one of you by the end of the week. All right, let's get started. Take it away, Barbara.
default user avatar Barbara Waddoups
01:50
All right, you can go ahead and change the slide, Lori, please.
01:56
My name again is Barbara Waddoups, and
02:00
I'm going to be presenting on Oracle 19c. It's going to be a short overview of it; we're going to
02:06
focus on this one short area. The first thing we want to look at is general enhancements. So, change the slide out here, please. This is just going to be some information that is general in nature.
02:22
The first item is that when we create our database using the installer, we have an option to automate the root
02:33
script execution. So the DBA, or whoever is doing the install, is not queried during the install process for the
02:42
root user credential. This is a very nice and helpful piece, especially if you want to automate the process. On to the next slide, please.
02:55
Something else that we have out here in 19c is the ability for DBCA to clone or relocate a remote PDB. This is very advantageous. Sorry, I've been talking for the last hour or so.
03:11
What we're looking at here is that if I use the createFromRemotePDB command, if I was scripting from within DBCA, I could go out through my database link and request that a remote PDB be cloned as another PDB. And
03:36
I can also do a relocatePDB command to go through and really, really,
03:44
sorry, relocate that PDB somewhere else. Now, when we do this process, take a look at the picture: we've got CDB2 at the bottom and CDB1 at the top.
03:55
We'll relocate PDB1 down into CDB2. Now, in this case,
04:04
we would not have any data files moving, as opposed to creating from a remote PDB, where I could have those data files moved.
04:13
So there's a little bit of a difference out here depending on what you're looking to do. But in either case, it's very, very easy to do this process. Next slide.
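For reference, the same remote clone can be sketched in SQL; DBCA's silent mode scripts this flow for you (all names here, the link, the service, and the PDBs, are hypothetical):

```sql
-- In the target CDB root: a database link back to the source CDB
CREATE DATABASE LINK clone_link
  CONNECT TO c##clone_user IDENTIFIED BY my_password
  USING 'cdb1_service';

-- Clone the remote PDB across the link, then open it
CREATE PLUGGABLE DATABASE pdb1_copy FROM pdb1@clone_link;
ALTER PLUGGABLE DATABASE pdb1_copy OPEN;
```

DBCA's silent-mode options (along the lines of -createFromRemotePDB and -relocatePDB, as I recall them; verify against the 19c administration guide) wrap this same mechanism and prompt for the database link details for you.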
04:26
Some additional information about our PDBs. Today, we can go out
04:31
and create a snapshot from our PDB in a way that allows us to have more control. Before, we could take a snapshot of a PDB as well; we just didn't have all of the
04:43
options that we have in 19c. So over here, I have a parameter that can control the level of content that I want to use with these snapshots.
04:54
I can say that I'd like my PDB snapshots to be sparse, where I'm not copying all the data files, or false, where I do want to get all of them. So we have true or false
05:08
on that particular control mechanism. It is something that gives the DBA a little bit more option when we're going out and doing this work.
05:18
If we look down at the bottom, we have some identifiers out here as far as where you can find information about
05:25
the new snapshots that you've created: you can look at DBA_PDBS to see the snapshot name, and you can look at DBA_PDB_SNAPSHOTFILE to see additional information associated with it. It's more distinct for that particular container. Next slide.
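As a sketch of what's being described (the PDB and snapshot names are hypothetical, and the sparse-versus-full copy behavior is governed by the CLONEDB initialization parameter, to the best of my recollection):

```sql
-- Take a named snapshot of a PDB
ALTER PLUGGABLE DATABASE pdb1 SNAPSHOT pdb1_snap_monday;

-- The views mentioned on the slide: snapshot mode per PDB,
-- and per-snapshot details for this container
SELECT pdb_name, snapshot_mode FROM dba_pdbs;
SELECT * FROM dba_pdb_snapshotfile;
```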
05:44
Some additional options we have with Data Pump, and these are really nice when we start thinking of moving our data.
05:51
If you want to allow your tablespaces to stay in a read-only mode during a transportable tablespace import, you can. This really gives you a good amount of benefit, because now
06:02
I can have that tablespace data file mounted on two different databases.
06:07
But it would have to remain in a read-only manner in order to do that. We also have to be careful, because the source and the target database have to be on the same Daylight Saving Time
06:18
edition, or version; otherwise you're going to have inconsistency in the date and time, and that's not going to prove beneficial, or even be something you would want to do, if you're going out and
06:31
doing this TTS import. In this case, it restores a pre-12.2 Data Pump type of behavior, what we had in the past. It is similar, but it's a better
06:46
mechanism today than what we had in the past.
06:49
We can also enable skipping the rebuild of our bitmaps when we're going out and doing some work. I could reclaim my free space if I wanted to, but we can skip that.
07:01
The reason we want to do that is so that, as we're moving or doing our work, this gives us a little bit of an advantage in reducing the amount of time it would take to go out
07:15
and manage these particular types of objects. So it would leave your tablespace in read-only; your bitmaps are not going to be rebuilt at that time.
07:24
If the time zones are different, you could then see that the time zone column would be dropped, because we're not going out and doing the rebuild.
07:32
I can also opt out of running my closure check. Again, looking at Data Pump, when I'm working with a transportable tablespace or a full transportable export, this gives you increased availability, because you're not taking the time for the closure checks to go out and
07:53
validate the particular data that's moving. Now, you would want to do this if you know that you're not going to have any issues. If you are unsure, then I would consider using that closure check anyway.
08:06
You can also suppress the encryption clause on your import.
08:11
This is very beneficial when you're looking at an Oracle Cloud migration, if you have non-cloud databases out there that you're migrating that have encryption clauses,
08:21
or encrypted columns. You can tell the system that you don't need to worry about that.
08:27
When we're looking at resource usage limits for Data Pump, this is something that can be very helpful. I can go through
08:35
and control the number of jobs that get started when I'm working in a multitenant environment
08:42
on those databases. I can also control the number of parallel workers that I would have available for individual Data Pump jobs within the multitenant environment. Next slide.
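The two multitenant limits just described map to initialization parameters; a sketch, assuming the 19c parameter names are as I recall them:

```sql
-- Cap the number of concurrent Data Pump jobs started per PDB
ALTER SYSTEM SET max_datapump_jobs_per_pdb = 4 SCOPE = BOTH;

-- Cap the parallel workers available to any one Data Pump job
ALTER SYSTEM SET max_datapump_parallel_per_job = 8 SCOPE = BOTH;
```

On the client side, the read-only import, the closure-check opt-out, and the encryption-clause suppression discussed above correspond, as I recall, to impdp TRANSPORTABLE=KEEP_READ_ONLY, expdp TTS_CLOSURE_CHECK=OFF, and impdp TRANSFORM=OMIT_ENCRYPTION_CLAUSE:Y; verify the exact spellings against the 19c utilities guide.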
08:58
All right, we're going to take a quick look at security enhancements out here.
09:02
Something that becomes important for us, again, is that if you do have Database Vault out there, we want to be able to have more control over access to data
09:14
at the Database Vault level when I'm looking at the CDB. Before, we didn't really have this level of control. Now we can go
09:24
to the Database Vault operations control and protect sensitive data at the PDB level, and we don't have to create
09:33
the realms or command rules that we had to create before to protect that data. So it really frees up the process, and I'm doing this in a more intuitive manner for the protection. Next slide.
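A minimal sketch of switching operations control on from the CDB root, assuming the DBMS_MACADM entry points are named as I recall them (verify against the 19c Database Vault guide):

```sql
-- As a Database Vault administrator in the CDB root:
-- block common users from touching local data in member PDBs
EXEC DBMS_MACADM.ENABLE_APP_PROTECTION;

-- And to revert:
EXEC DBMS_MACADM.DISABLE_APP_PROTECTION;
```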
09:47
We also have, when we're looking at the Database Vault operations, the ability to grant the DV owner role locally in the CDB root.
10:00
So this gives us the ability to have common users out here who are able to do more work within the Database Vault operations control mechanisms.
10:11
If I'm going to do this, I can configure my Database Vault and force the local DV owner to be true out here. That's what we had in 18c. What we want to do now in 19c is add in more options.
10:25
So we can grant that, but we also have those particular users being able to do more activity, which may be beneficial depending on how you set up your environment.
10:37
Looking at the Database Vault command rules out here, we have
10:42
a way to have a unique identifier for the rules that we create. Before, we didn't have this; we didn't have a way to distinctly define that. So now I have my command,
10:55
and I have my object out here that is going to be part of the definition of that particular policy, and we see that in red when we look down at the bottom. Next slide.
11:09
We also have auditing, or the option to audit top-level statements only, which allows you to look at just the direct information and not secondary types of auditing. So in this case, I'm looking down at the last two
11:27
fields that we have out there in our blocks. What we're doing is minimizing those audit records.
11:32
In that case, if it was something direct, I would be managing it from an auditing perspective; but if it is something that I'm calling from a particular procedure, we're not going to audit it, because it is not a direct statement.
11:46
Next option. Or rather, next slide.
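The top-level-only option is a clause on a unified audit policy; a minimal sketch (the policy name, actions, and user are illustrative):

```sql
-- Audit only statements issued directly by the user; statements run
-- recursively from inside procedures or triggers are not recorded
CREATE AUDIT POLICY toplevel_dml_pol
  ACTIONS DELETE, UPDATE
  ONLY TOPLEVEL;

AUDIT POLICY toplevel_dml_pol BY hr_app_user;
```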
11:49
One of the things we had in 18c, as well as 12c, is the ability to capture privilege analysis, meaning I want to see which privileges the users
11:58
have and which privileges they don't have,
12:02
and which ones they're not using. So what we want to do is have that available. It used to be part of Oracle Database Vault, and it's now also available
12:13
within Database Enterprise Edition, something that is really helpful now, instead of having to do it in a different way.
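A sketch of the capture flow now included with Enterprise Edition (the capture name is hypothetical):

```sql
-- Define and start a database-wide privilege capture
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'db_wide_cap',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('db_wide_cap');
END;
/

-- ...run the workload for a while, then:
EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('db_wide_cap')
EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('db_wide_cap')

-- Used and unused privileges recorded by the capture
SELECT * FROM dba_used_privs   WHERE capture = 'db_wide_cap';
SELECT * FROM dba_unused_privs WHERE capture = 'db_wide_cap';
```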
12:20
Looking at your operations on Oracle-managed and user-managed tablespaces, when they are encrypted with TDE:
12:28
what we're doing now is allowing operations on encrypted Oracle-managed tablespaces
12:35
when we're going out and working in our PDB and the CDB. So we can use our options now to move forward with our keystore migration, and we can do this for all of the data except what would be working with that encrypted metadata inside.
12:56
We can also, in 19c, allow the database to recognize when you have a password file location change. Right now, we can't do that.
13:06
In 19c, we can run ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE to refresh the cache, and then the database knows that a particular password file has changed. Next slide.
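The statement is short enough to show verbatim, together with the view I would check afterward:

```sql
-- Tell the instance the password file has moved or changed
ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE;

-- Confirm what the instance now sees
SELECT * FROM v$passwordfile_info;
```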
13:20
Let's look at some availability enhancements out here, when we're working with our recovery catalog. The recovery catalog is part of your RMAN
13:30
system. Before 19c, when I was using the recovery catalog, I would have to access the container database in order to access the target for the backup of a PDB. Today, we can go directly to that PDB, which is what we see in the second point on our slide.
13:48
Next slide.
13:51
When we're looking at 19c, we have new compression available as standard, which is going to be more beneficial to us when we're looking at larger data. We have that in addition to what we had before. Next slide.
14:05
When we're working with Oracle Database flashback:
14:09
what we have now in 19c is the ability to have an automatic purge of replaceable flashback logs, those logs that we don't necessarily need. Before, we would have to do this on our own; the process would run, and we might run out of space. Today, we can automate it.
14:27
Next slide. We also have the ability, and this is a very, very good option, to
14:36
provide the restore points that I might have on my primary system over to all of my standbys. Before, we could not do that. This really helps you when you're looking at maintaining your primary in coordination with your standby. Next slide.
14:55
Next slide.
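On the standby side, a quick way to see which restore points arrived from the primary; this assumes V$RESTORE_POINT carries a REPLICATED flag in 19c, so verify the column in your release:

```sql
-- On the standby: restore points propagated from the primary are flagged
SELECT name, scn, time, replicated
FROM   v$restore_point;
```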
14:57
All right, performance enhancements out here.
15:00
We have the ability to work with the Memoptimized Rowstore. We had this in 18c, but what we had a problem with was being able to ingest the amount of data that was coming in very, very quickly. This is something we see in big data.
15:14
If I've got a lot of streaming data coming in, I need to be able to process it as quickly as possible. In 19c, we are going out and
15:23
being able to manage that data a little bit faster. So it supports those
15:29
high-volume, small transactions that are coming from maybe your phone, maybe the Internet, or wherever they're coming from. The problem is, we could still end up having some data loss;
15:40
inserts might have to be retried. But we are managing the data faster, and this will improve over time.
15:48
Next slide.
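A sketch of the fast-ingest path into the Memoptimized Rowstore (the table and column names are hypothetical). Note the trade-off mentioned above: buffered inserts are deferred, so rows that have not yet been drained to disk can be lost and may need to be retried:

```sql
-- Mark a table for write ingest into the memoptimized rowstore
CREATE TABLE sensor_readings (
  device_id  NUMBER,
  reading_ts TIMESTAMP,
  reading    NUMBER
) MEMOPTIMIZE FOR WRITE;

-- The hint routes the insert through the fast-ingest buffer
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
VALUES (42, SYSTIMESTAMP, 98.6);
COMMIT;
```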
15:52
Configuring your automatic ADDM, or Automatic Database Diagnostic Monitor,
16:00
report to be at the PDB level: before, we did not have the ability to get my AWR or my automatic
16:10
ADDM at the PDB level. Now we can, and this is a very, very good tool, because many times when we're troubleshooting our database, we want to be able to look down at the PDB level, especially if I'm dealing with application data. Next slide.
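A sketch of the PDB-level flow, assuming the parameter and package names are as I recall them (the snapshot IDs are placeholders):

```sql
-- Inside the PDB: allow AWR snapshots at the PDB level
ALTER SYSTEM SET awr_pdb_autoflush_enabled = TRUE;

-- Run ADDM over a pair of PDB-level snapshots and read the report
VARIABLE task_name VARCHAR2(100)
EXEC :task_name := 'pdb_addm_1'
EXEC DBMS_ADDM.ANALYZE_DB(:task_name, 101, 102)
SELECT DBMS_ADDM.GET_REPORT(:task_name) FROM dual;
```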
16:33
Another piece, as far as managing the way that we do our work:
16:39
one of the things that is extremely beneficial for your SQL developers is that now I do not need to give my SQL developers the SELECT_CATALOG_ROLE
16:49
in order for them to do some troubleshooting or debugging on their particular code. That means they can run something called the
17:00
SQL monitor and get that information. They don't need to ask the DBA for the SELECT_CATALOG_ROLE to have permission on certain data dictionary objects, because they no longer need it. This is a great boon to our developers. Next slide.
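The report those developers can now pull themselves comes from the SQL monitor API; a sketch (the SQL ID is a placeholder):

```sql
-- Force monitoring of the developer's own statement
SELECT /*+ MONITOR */ COUNT(*) FROM employees;

-- Generate the text report without needing SELECT_CATALOG_ROLE
SELECT DBMS_SQL_MONITOR.REPORT_SQL_MONITOR(
         sql_id => 'abcd1234wxyz9',  -- placeholder
         type   => 'TEXT')
FROM dual;
```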
17:18
When I'm looking at SQL execution plans and comparing them out here:
17:26
before, in 18c, I had different methods of comparing those plans. I could do lots of different work. But today, and we can move that slide up, I don't know if you can move the page up a little bit, Lori, I'm not sure, but today, what we can do,
17:43
maybe it's not going to come up, oh, that's okay. Today, we can list different cursor cache objects, not just two; I can compare many, and this
17:55
takes over for what we had before. This is really good, because now I have a larger set of plans that I can compare at one time, instead of having to do multiple passes to look at that comparison. Next slide.
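The many-plan comparison is exposed through DBMS_XPLAN.COMPARE_PLANS; a sketch with placeholder SQL IDs, assuming the constructor names are as I recall them from the 19c documentation:

```sql
-- Compare one reference cursor against several others in a single pass
SELECT DBMS_XPLAN.COMPARE_PLANS(
         reference_plan    => CURSOR_CACHE_OBJECT('7ks2gqbszzzz1', NULL),
         compare_plan_list => PLAN_OBJECT_LIST(
                                CURSOR_CACHE_OBJECT('7ks2gqbszzzz2', NULL),
                                CURSOR_CACHE_OBJECT('7ks2gqbszzzz3', NULL)),
         type              => 'TEXT')
FROM dual;
```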
18:15
Going along and looking at some of our availability out here:
18:20
when we start populating data into memory, we need to understand
18:26
how long it's going to take to get that data out there. We'd like to know about the particular priority that we set on the data and how quickly it is getting out to be in memory. So we can go out
18:38
and look at the populate wait to find out
18:43
how that data is being populated in memory. You can set it by different percentages of how much data you're looking for,
18:50
to determine how long it's taking to get up there. We have different return codes that come out: zero is going to be a populate success within the timeframe that you set up;
19:00
one would be saying I'm out of memory, and so on. So we would need to look at this over time. If it is something that you're definitely setting up in memory, you would want to get some idea about what is your typical allowable time, going through and
19:19
managing on your system, for your data to get into memory.
19:26
Next slide.
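The wait being described is the POPULATE_WAIT function; a sketch, assuming the DBMS_INMEMORY_ADMIN signature (priority, percentage, timeout) matches what is on the slide:

```sql
-- Block for up to 5 minutes until all HIGH-priority objects are
-- fully populated; a return of 0 means success within the timeout
DECLARE
  status NUMBER;
BEGIN
  status := DBMS_INMEMORY_ADMIN.POPULATE_WAIT(
              priority   => 'HIGH',
              percentage => 100,
              timeout    => 300);
  DBMS_OUTPUT.PUT_LINE('populate status: ' || status);
END;
/
```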
19:29
If you're working with developers or companies that do use Oracle replay, that is, if you're doing Database Replay:
19:41
what we can do now is work with this at a PDB level, not with the entire container. Before, we could not do that. That's where you saw this Oracle error,
19:51
ORA-20222, that says you can't do this within a package where the PDB is not allowed. Now we can go through and do it at that level. Again, this is something that,
20:05
when I work with developers, they really like, especially if you're using Database Replay.
20:12
Next slide.
20:15
When we start looking at working with supplemental logging, when you're working with your subset database
20:21
replication, you would have your ENABLE_GOLDENGATE_REPLICATION set to true. And now I have the ability to control supplemental logging
20:30
depending on what I might need. So for tables out here, from minimal supplemental logging, to tables with no column supplemental logging, to tables that have no replication, you have options now that are a little bit broader and easier to control than what we had before. Next slide.
20:48
For fine-grained supplemental logging, you would still have ENABLE_GOLDENGATE_REPLICATION enabled,
20:53
but you can then control, as well, the same types of things that we were talking about. In this case, what I would do is say CREATE TABLE and then maybe DISABLE LOGICAL REPLICATION, or ALTER TABLE and DISABLE LOGICAL REPLICATION, depending on the amount of
21:10
logging you want. This can really be helpful as you're doing some of your replication. Next slide.
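A sketch of the coarse and fine-grained controls just described (the table name is hypothetical; verify the exact clauses against the 19c SQL reference):

```sql
-- Prerequisite for subset replication
ALTER SYSTEM SET enable_goldengate_replication = TRUE SCOPE = BOTH;

-- Database level: log only what subset database replication needs
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

-- Fine-grained, per table: opt a table out of logical replication
ALTER TABLE app.audit_archive DISABLE LOGICAL REPLICATION;
```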
21:19
We're going to look at a little big data out here, and data warehousing.
21:23
We have an automatic indexing task now, which is a great help when we're working with this big data out here, when we have an automatic task
21:33
that you can invoke when we're working with those big workloads
21:38
that are running. So we can create, re-enable or rebuild, disable, drop, and make indexes invisible, if that's what you want to do. You can also get nice
21:48
information about the impact of those particular indexes that you might have. And it really does do
21:55
you some good as far as improving the performance of your DML, because it's automatic; at this point, it's going to reduce the resource overhead that you would normally use. Next slide.
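The automatic indexing task is driven through the DBMS_AUTO_INDEX package; a minimal sketch of turning it on and pulling the activity report:

```sql
-- Implement auto indexes ('REPORT ONLY' observes without creating any)
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT')

-- Summary of what the task did, including verified performance impact
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;
```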
22:12
Again, a little bit more information on that: if you are going to do your automatic indexing out here, you have some views that you can take a look at
22:21
to show you the different executions of the tasks that ran, what those actions were that actually performed work, and you can have various
22:32
information about the different actions from the SQL statements that were being verified by your auto indexing. Next slide.
22:42
If you're working with hybrid partitioned tables: usually we're thinking about big data at this point, and what I'm looking at is how I'm managing that data. So I can have my partitions out here
22:56
not just from the database tables, but also from my external tables on my file system. It can be a Hadoop system, perhaps,
23:05
that I need to have partitioned. So what we're seeing here is the ability to define whether a partition is external, or whether
23:16
it is local, from the database tables themselves; and how I set this up would be determined based on where the data is. Next slide.
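A sketch of a hybrid partitioned table mixing an external and two internal range partitions (the directory, file, and table names are all hypothetical):

```sql
-- The oldest partition reads a flat file; the rest live in the database
CREATE TABLE orders_hybrid (
  order_id   NUMBER,
  order_date DATE
)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY archive_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
)
PARTITION BY RANGE (order_date) (
  PARTITION p_archive VALUES LESS THAN (DATE '2018-01-01')
    EXTERNAL LOCATION ('orders_2017.csv'),
  PARTITION p_2018 VALUES LESS THAN (DATE '2019-01-01'),
  PARTITION p_2019 VALUES LESS THAN (DATE '2020-01-01')
);
```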
23:28
If you want to have some of these hybrid partitioned tables put in memory, you can, but we have to be careful about what we're doing. So my external partitions
23:38
would typically be in the buffer cache; my internal partitions would normally be out there in memory itself. But they can work together, in order for the system to keep track of where that data is when we're managing it through our code.
23:56
Next slide.
23:59
If you want to manually populate your in-memory external tables, you can; we had that in 18c.
24:08
I can populate a particular object out there. So it's an external table, and I'm manually doing it; that's the way we would have had to do it in 18c.
24:16
When I'm looking at 19c out here, I can query that external table to populate it, similar to what we would do for an internal table. So it's more like that
24:28
internal table, where I'm querying and then having that data put into memory. So it gives you a little bit better performance. Next slide.
24:37
When we're looking at querying that data in parallel in memory for external tables: when I'm looking at 19c, I would simply set
24:46
my query rewrite integrity to be stale tolerated for the data, then define my parallel option in my hint,
24:54
and grab that, and then I can select that information from my cursor to see what was going on. Up above, in 18c, we were simply altering the session and setting query rewrite integrity to be stale
25:05
tolerated, but I didn't have the ability to use the parallel statements.
25:10
Next slide.
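A sketch of the 19c parallel pattern just described (the external table name is hypothetical):

```sql
-- Required session setting for querying in-memory external tables
ALTER SESSION SET query_rewrite_integrity = stale_tolerated;

-- 19c: the parallel hint now applies to the in-memory external scan
SELECT /*+ PARALLEL(4) */ COUNT(*)
FROM   ext_sales_history;
```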
25:13
If you're working with big data, or data in some other areas, then you can have support for that data when we're working with the Oracle Loader
25:25
or Oracle Data Pump drivers. In 18c, we could work with the loader. Now, in 19c, in addition to the loader
25:32
component, we have the ability to have data of type Oracle Hive or Oracle Big Data, so it is really
25:40
very, very good for that fast streaming data that we have coming in. We saw some options out there for fast ingest through memory, and this goes toward a lot of that
25:53
type of support for that very large data that we do have to manage. Next slide.
26:02
All right, some diagnosing types of enhancements. For your incidents, what we can do out here is have an automated service component for gathering the information
26:16
for your incident and being able to send it up to Oracle really, really quickly. That is part of what we're talking about here, as far as your fault prediction, your detection, and your resolution.
26:28
So you as an individual, or the DBA as an individual, doesn't have to do all of that work.
26:35
We can quickly get that information together. We already have incidents created, but we can then send them to Oracle, and also get some information from our system, to diagnose it and repair it very quickly. Next slide.
26:51
The SQL Test Case Builder export procedure is something that gives us a new control option. Before, if I was looking at the test
27:01
case builder, I could get compiler traces, so I could see what was going on in that particular query.
27:07
But in 19c, we get a lot more benefits. We have more control options to manage the parallel execution code generation; I can get traces at different levels of information.
27:19
When I'm looking at issues, I can specify what type of issue I'm looking at: performance, wrong results, or compilation or execution. I can also use the test case builder to
27:35
define what data it is that I'm looking at, because it's going to be more specific according to the PDB it's working within.
27:42
The test case builder is also going to ensure that your export files aren't going to have any security-related information in them, like passwords, or information that it would have gotten from some other PDBs.
27:57
We would have a default directory associated with that object and the test case that you just ran. So, a very good mechanism.
28:05
Looking at SQL diagnosis and repair now: if I look at what I had in 18c and before, I would have created a diagnostic task, I could set a tuning parameter, I could execute the diagnostic
28:18
task, and I could do the next steps of a SQL patch. Those were all individual pieces. In 19c, we can have this all combined into one.
28:26
So we have this incident ID, which we were talking about before. If your incident didn't
28:31
already exist, then the procedure is going to create that incident for you.
28:38
So, especially if you want to give this information to Oracle Support, I can define the SQL text and the problem scope that I want associated with it. In this case, it's going to be a performance
28:49
scope out here, a comprehensive type of review, and we want to be able to apply a patch, if one is given to us, automatically.
28:56
So this is something where you really combine what we had to do in many pieces into one. It really does improve the time it would take to do this type of diagnosis and repair. Next slide.
29:17
Lori, can you hear me?
29:27
Lori, can you hear me?
default user avatar Lori Teskey
29:28
Yeah. Support for multiple
29:33
PDB shards in the same CDB. Do you see that slide?
default user avatar Barbara Waddoups
29:39
No, I'm seeing SQL diagnosis and repair.
29:51
No.
29:53
I don't know what's going on. It could be that I'm just losing connection.
30:00
It might be my system going out and reconnecting; one second.
30:05
Okay.
30:15
Let's see what's going on.
default user avatar Barbara Waddoups
30:30
All right, um, what I'm seeing now is the sharding enhancements.
default user avatar Unknown Speaker
30:33
Yes.
default user avatar Barbara Waddoups
30:35
Okay, great. All right, we're good. Thank you so much for waiting. Next slide, please.
30:42
Okay, looking at sharding. Sharding is, I would say, when you're thinking of the database, somewhat new. It's been around for a little bit, but for a lot of people it is new.
30:52
So in 18c, when I'm looking at sharding, I could only have one table family per sharded database. In 19c, we can actually have more, which is a nice option out here. So you could have
31:03
data in the same shard when I'm looking at the sharding component, or I could have data from
31:08
different applications in one particular sharded database, which is very beneficial for your activity, or for your applications. We would have this for system-managed sharded databases only, and we would have a new
31:22
option to configure the table families to enable some of this. Next slide.
31:29
In 18c, when we were working with our shards out here, you could have a shard and a shard catalog
31:36
within a particular, or for a particular, PDB in a container database, and we had only that one-shard-per-CDB option; it was
31:46
literally at that level. When we're working with 19c, we have PDB shards, and the shard catalogs can be with different sharded databases that reside in your container database.
31:58
So now we have the ability to add CDBs and PDB shards out there to your configuration. It gives you more options for better management, especially of that data that you have in the shard. Next slide.
32:13
We also have the ability to generate unique sequence numbers across your shards in a more
32:21
advantageous way. Before, we had to manually create and manage those unique sequences within our shards, and that took a little bit of time.
32:33
Today, we have a unique type of sequence numbering that's going to be handled by the sharded database.
32:40
So we have a new sequence object that defines whether it's SHARD or NOSHARD; that's what we see down at the bottom in red. So you do have a much easier way to do this than what we did before. Next slide.
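The sequence object shown in red has a SHARD or NOSHARD clause; a minimal sketch (the sequence names are hypothetical):

```sql
-- Each shard draws values from its own non-overlapping range
CREATE SEQUENCE order_id_seq SHARD;

-- The default behavior, for contrast
CREATE SEQUENCE plain_seq NOSHARD;
```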
32:56
We do have support for multi-shard query coordinators on those shard catalog standbys, again something that is a little bit nicer than what we had before, because before,
33:08
those multi-shard queries could not be performed anywhere except on my primary shard catalog database; that's the only place they could be run. Excuse me.
33:17
When we're looking at 19c, we can perform them
33:22
on your Active Data Guard standby, which is again a nice way to use your Data Guard environment as a support for queries on your system.
33:34
It improves your scalability and availability, and it does have a coordinator service associated with it. From a scope-of-affinity standpoint, it's regional in manner. Next slide.
33:49
When we're looking at managing the parameter settings across our shards in 19c, this is going to be centrally managed out here,
33:59
and we can propagate those particular settings that you have created across the shards, from your shard catalog to the various others. We can use the ENABLE SHARD DDL option, again on that shard catalog. Next slide.
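A sketch of the centralized flow described here, run on the shard catalog (the parameter is chosen purely for illustration):

```sql
-- On the shard catalog: changes made in this session propagate
ALTER SESSION ENABLE SHARD DDL;

-- This setting is now applied across the sharded configuration
ALTER SYSTEM SET open_cursors = 500 SCOPE = BOTH;
```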
34:16
Looking at Oracle 19c: this was a very, very quick review of what we have out there. What I would highly suggest is that if you have
34:29
customers who want to get more information about 19c, take a look at the classes. Also, you can look at TOD, which is our Training On Demand. It gives you that same option to be able to see a recorded
34:44
class, or a recorded set of information from an instructor, that is going to give you the same information that you would get in the class.
34:53
Students get a lab environment that they can set up to be run anytime they would like. They can look at the TOD
35:02
over a period of time; they get a lot of opportunity. If a student has a problem with a TOD lesson, they can always make a request to talk to an instructor.
35:12
Instructors are required to report back to that student within 24 hours. So it's a really, really nice way to get some
35:22
information to understand the 19c systems. We don't have anything outside of the new features class associated with 19c from a direct plan, but if you go through and look at the 18c
35:37
classes, where you've got the administration workshop or managing the multitenant architecture, those would be something directly associated with 19c.
35:45
Some of our more agnostic classes are not going to have a particular edition or version set against them; they are going to be good for
35:54
anything up to 18c. So as 19c becomes more fluid out there in the scope of what users are doing, we'll probably see much more coming out for 19c.
36:12
All right, that is my lecture component. I did see that there were a couple of questions out there, and I'm just going to hand it back over to Lori out here to work with those students, sorry,
36:27
members, who are listening, so that we can address those. I apologize; I'm teaching today also.
default user avatar Lori Teskey
36:35
Thank you, Barbara, and I apologize for the technical difficulties today. I think we've lost Michelle, our usual webinar host, but we will keep the line open for another couple of minutes to entertain questions from anybody. I do have one question here in the chat for you, Barbara.
default user avatar Barbara Waddoups
37:00
And he says,
default user avatar Lori Teskey
37:01
Can we perform all the PDB-related enhanced features from OEM?
default user avatar Barbara Waddoups
37:09
That's a very good question. And actually, not everything yet. It is going to get pushed down there, but they're not all enabled at this time.
37:22
Okay.
default user avatar Lori Teskey
37:23
Any more questions?
37:27
We'll hold the line open for a couple more minutes. You can put your questions in the Q&A or in the chat.
37:39
And for those of you who have attended today: this webinar is being recorded.
37:45
Tomorrow, an email will be sent out with a link to the recorded version, so you can view it at your leisure and take advantage of the links to some of the courses and training we offer for Oracle.
38:35
Okay, well, I don't see any more questions popping up. So, Barbara, I don't want to take any more of your time. To all the attendees: thank you again for attending our session on Oracle Database 19c.
38:50
Thanks for suffering with us through the technical
38:53
difficulties, and...
default user avatar Unknown Speaker
38:54
Thank you, Barbara.
default user avatar Lori Teskey
38:57
We'll be sending a link
default user avatar Barbara Waddoups
38:58
To upcoming
default user avatar Unknown Speaker
38:58
Tomorrow,
default user avatar Barbara Waddoups
39:01
Thank you to everybody who attended as well.
39:04
Bye.
default user avatar Unknown Speaker
39:05
Bye.