[Announcer] This conference will now be recorded. [Deana Crouser] All right, good morning, everyone. And welcome to another EcoFOCI seminar series. I am Deana Crouser, co-lead of the seminar series with Heather Tabisola. This seminar is part of NOAA's EcoFOCI biannual seminar series, focused on ecosystems of the North Pacific Ocean, Bering Sea, and US Arctic to improve understanding of ecosystem dynamics and applications of that understanding to the management of living marine resources. Since 1986, the seminar has provided an opportunity for research scientists and practitioners to meet, present, and provoke conversation on subjects pertaining to fisheries oceanography or regional issues in Alaska's marine ecosystems. Visit the EcoFOCI webpage for more information at ecofoci.noaa.gov. We sincerely thank you for joining us today as we continue our all-virtual series. Look for our speaker lineup via the OneNOAA seminar series and on the NOAA PMEL calendar of events. Did you miss a seminar? Catch up on PMEL's YouTube page. It takes a few weeks to get these posted, but all seminars will be posted. Please check that your microphones are muted and you're not using video. During the talk, please feel free to type your questions into the chat. We'll be monitoring the questions and we'll address them at the end of the talk. And today I am pleased to introduce one of our newest team members here at EcoFOCI, Emily Lemagie. Emily is a scientist and principal investigator at NOAA's Pacific Marine Environmental Laboratory, leading studies of the dynamic relationships among climate, fisheries, and the marine environment. She completed her PhD in physical oceanography at Oregon State University in 2018. And her research expertise is in estuarine science, river plume dynamics, inner shelf dynamics, and science communication. So please give her a warm welcome. And with that, let's begin. [Emily Lemagie] Thanks, Deana and Heather, for organizing, and thanks everyone for coming. I think that we're competing with the Fisheries Congress and maybe a few other events as well, so I appreciate the turnout. And so, yeah, before I begin, I also want to thank my collaborators for this project. The research I'm presenting today was work done with Sarah Giddings and utilized the output from several numerical models, which was graciously provided to me for this project. And the development and validation of those models represents a tremendous effort to contribute to our scientific understanding [audio is choppy]. And I'll start with some context and background to understand the significance of water mass exchange in Alaskan waters. The Alaska marine ecosystem is very productive per unit area, but also covers a very wide area. And it's a heterogeneous system made up of several connected regions. And the climate variability, the dynamics, the ecosystem response to climate forcing all vary significantly between those regions. So the transport and the exchange and mixing of water masses throughout the system determine the connectivity between those regions and across scales. And that's a key part of understanding each piece of the system and the system as a whole and in a global context. So to provide a few more specific examples of water mass exchange through the Alaskan ecosystem, starting with the Bering Strait. That's the gateway between the Pacific Ocean and the Arctic, and thus the conduit for the transport of heat and fresh water between these two regions.
Rebecca Woodgate et al., in 2015, published a synthesis of mooring measurements across the channel. And this figure illustrates how the transport of those key water properties can vary spatially [audio is choppy] this channel. Water transported from the North Pacific is also advected into the Bering Sea through the Aleutian Islands. And this map of cross sections through those channels shows the variability in the depth and the bathymetry scale across the various channels throughout that island chain. And along the Alaskan coastlines, estuarine exchange flows, and as we've seen, canyons, also play an important role in the cross-shelf transport of heat and salt and nutrients between the shelf and inshore nursery habitats. And there are several examples of this. Here are two figures from work in the Gulf of Alaska, by Mordy et al. and Stabeno et al., demonstrating the importance of mixing in the canyons around Kodiak Island and in Cross Sound on ichthyoplankton and nutrient availability over the shelf. And just a little bit more background here for anyone who may be less familiar with the EcoFOCI program: our long-term monitoring and data collection, measuring water mass exchange and also water properties and ecological variability in the basins, provide the foundational knowledge and context for understanding how climate change impacts the Alaskan ecosystems. And we rely on these observations for detecting regional and global climate change and its impacts on ecosystems to give context and information for direct and effective management of the fisheries resources. And as illustrated by the sheer number and the scale of the mooring anchors pictured here, and the expanse of the Alaskan marine ecosystems on the map I just shared, this is a hefty undertaking. So efficiency and innovation are important values for the [audio drops out], to provide quality science and data that's relevant and has sufficient coverage to meet the needs of our many stakeholders and partners. And so the objectives for this research into strategic mooring placement were to compare sampling resolutions in the lateral and vertical dimensions across the channel cross section. And the metric for this comparison was the convergence of the total exchange flow, or TEF, calculated from numerical modeling results as the sampling resolutions vary. And I'll explain the TEF method in more detail later. But this is a measurement of both the exchange volume and transport and of the water property, such as salinity, that's exchanged. And in addition to assessing the impact of sampling resolution, I also wanted to compare the sensitivity of this method to different sampling and extrapolation methods. And that's to assess strategies that might further reduce the number of moorings or the sampling resolution required for the TEF method to converge. And finally, an objective was for these results to be broadly applicable, so we could identify sampling strategies and resolutions that would be more suitable for measuring the exchange flow in different channels in order to provide some guidance and best practices for new monitoring activities, and also interpretation of existing data. So a bit of a roadmap for today. First, I'm gonna describe the three channels that I've focused on and talk about the characteristics and why these were chosen for this study.
Then I'm gonna introduce the TEF method and show some results from using the full model resolution in the calculation before comparing different sampling resolutions and sampling strategies, and finally summarizing some of the lessons learned and insights [audio drops out]. So this analysis to develop sampling strategies focused on domains from three realistic numerical models, all of which are on the US west coast. One is a ROMS simulation of the Salish Sea, which included the Strait of Juan de Fuca. It was run from 2004 to 2007, and I'm using data from the 2005 results. It has a 1 1/2 kilometer resolution. So for each of these channels that are shown in red, there are 14 cells across the channel and 40 sigma levels in the vertical. And there's another ROMS model of San Diego Bay, which was coupled with a SWAN wave model and had eight-meter resolution, so a very fine grid scale. And so, even though the scale is much finer, these cross sections have 14 and 16 grid cells across the channel as well, and 10 vertical levels. And the third model is in the Columbia River. It's a SCHISM model, which is a derivative of the SELFE model and has an unstructured horizontal grid. The resolution was tens of meters near the coast. And so the extracted cross sections here had 41 and 58 cells across, and this model has 37 vertical levels. So I'll talk about all three of these systems to point out some of the key characteristics, but you don't have to worry about the particular details unless you're interested in the specific system. My point is that these three model channels represent a variety of characteristics, and this is meant to justify the extrapolation to more general hypotheses and recommendations from this sample set. And so for each of these models, I selected two cross sections in order to look at the sensitivity of the results to the specific location. And I want you to note, for example, the curvature in the San Diego Bay. So this section to the north, labeled SDBC, is on a sharp curve in the channel, and SATURN-03 in the Columbia River crosses two deep channels that are separated by a shallow sill. So the range of the bathymetry and geometry within these three systems is also [audio drops out]. And here's just another view to compare and contrast these systems. These are the time means over the full year of each simulation's currents and salinity. Now, the scales are different for each plot, but the most relevant detail here is that I want to point out the currents, shown in the red and blue color scale, have shear in both the vertical and horizontal directions. In general, the flows tend to be positive, and that's inward to the estuary, near the bottom, but also on the right, so we're looking into the estuary in the northern hemisphere here, except in the San Diego Bay, the upriver section, where there was a lot of curvature; there the inflow tends to be on the left. And also if you look at the top left panel here, labeled BH1, in the Strait of Juan de Fuca, you can see an inflow of relatively fresh water along the surface. And this transient feature is the intrusion of water from the Columbia River, which can occur during downwelling-favorable winds. And as a side note, this is a great example of how different regions, or estuaries in this case, can be connected by circulation of various water masses. And you can see the location of the Columbia River relative to the Strait of Juan de Fuca on the regional map.
And as for monitoring water mass exchange flows, the variations shown here and the variability in both the horizontal and vertical directions suggest that one would need pretty high resolution in both the lateral and vertical dimensions to accurately resolve the exchange. And those cross sections showed a time average of the flow and salinity. I also want to take a moment to talk about the forcing and the time dependence at each of these locations. So this is using spatially averaged salinity as an example. These time series are low-pass filtered using a built-in filter to remove the dominant tidal signal, but the mean tidal range of the high frequency part of the signal is marked with these red bars. And the gray shading is the standard deviation of the spatial average. And to walk through these briefly in order from north to south, there's little seasonal or tidal variation in the salinity in the Strait of Juan de Fuca. And this pattern is similar for both sections, so I'm just showing one. The Columbia River had a distinct seasonal pattern, as well as fortnightly variability from the spring-neap tidal oscillations and strong tides. And the San Diego Bay is an interesting system, with tidal variability, reversed exchange flows, where saltier water outflows near the surface at times, and a thermally controlled stratification, which is unique for estuaries. So I've plotted the temperature variability throughout the year here for [audio drops out]. So again, I'm just primarily showing all these figures together so you can get a sense that the dominant patterns vary quite a bit across this sample set. And so I wanted to give you a flavor of the range of conditions that's represented by this sample set to justify the extrapolation to more general hypotheses. But I also wanna condense some of that information into a summary and put these patterns into context. In 2008 Arnoldo Valle-Levinson published a semi-analytical theory to determine the conditions under which the density-induced exchange flow is vertically or horizontally sheared, based on the Ekman and Kelvin numbers. This is a figure from that paper, and from the model simulations of the Strait of Juan de Fuca and San Diego Bay, I can directly compare the Kelvin and Ekman numbers and put them in the context of the plot. And so to walk you through this figure, on the horizontal axis is the Ekman number in logarithmic scale. So positive values are where there's strong friction. And as a result, there tends to be inflow in the deeper part of the channel and outflow over the shoals. So you get strong horizontal shear. And for weaker friction, you tend to have inflow at the bottom layer and outflow at the surface, so you have vertical shear. And on the vertical axis is the Kelvin number in logarithmic scale. So that's representative of the width of the estuary relative to the internal Rossby radius of deformation. So that's the influence of Coriolis. And so for positive values, where you have a wide estuary, Coriolis is important, and so you tend to have flow to the right of the channel, looking inward [audio drops out] here. And so again, you get a horizontal shear. And so both of these model simulations fall into the part of the parameter space where there's expected to be both horizontal and vertical shear in the flow, but for different reasons and different dynamics.
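To make that parameter space a little more concrete, here is a minimal sketch of how the two nondimensional numbers could be computed, assuming the definitions commonly attributed to Valle-Levinson (2008): an Ekman number Az/(f H²) and a Kelvin number equal to the channel width over the internal Rossby radius of deformation. The numerical values below are purely illustrative placeholders, not values from the talk's simulations.

```python
import numpy as np

def ekman_number(Az, f, H):
    """Ekman number Ek = Az / (f * H**2): frictional versus Coriolis effects.
    Az: vertical eddy viscosity (m^2/s), f: Coriolis parameter (1/s), H: depth (m)."""
    return Az / (f * H**2)

def kelvin_number(B, g_prime, H, f):
    """Kelvin number Ke = B / Ri, channel width B scaled by the internal
    Rossby radius of deformation Ri = sqrt(g' * H) / f."""
    Ri = np.sqrt(g_prime * H) / f
    return B / Ri

# Hypothetical values loosely representative of a wide, weakly frictional strait
f = 1.1e-4       # Coriolis parameter near 48 N (1/s)
Az = 1e-3        # eddy viscosity (m^2/s), assumed
H = 150.0        # channel depth (m), assumed
B = 20e3         # channel width (m), assumed
g_prime = 0.02   # reduced gravity (m/s^2), assumed

print(f"Ek = {ekman_number(Az, f, H):.2e}, Ke = {kelvin_number(B, g_prime, H, f):.2f}")
```

Large Kelvin numbers (wide channels) push a system toward horizontally sheared exchange, and large Ekman numbers (strong friction relative to rotation) do the same, which is the reading of the figure described above.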
And so I'd point this out, 'cause I'm using this shear as a proxy to hypothesize the sampling resolution that you might need to monitor the magnitude of the exchange. And this suggests to me that these particular systems might provide conservative estimates for sampling resolution because of the complexity of the flow. So let's get back to what I'm trying to resolve here. Looking at the TEF, I'm finally gonna explain that. This measures, as I said, a flux, such as the volume flux, heat flux, or salinity flux, but it's categorized by a tracer rather than a location. So in this case, I'm using salinity. And to dissect this first equation, the currents normal to the section are integrated over an area of the cross section. But that area is defined in terms of the given salinity, and the salt flux and the volume flux are calculated in the same way. And here the angle brackets represent the low-pass filter, if you're looking at time series, so that you examine the subtidal exchange flow, or to look at the mean that can also just be a time average. And in order to make this calculation in practice, then, you have to measure the normal velocities and the tracer values at discrete locations, at discrete and known depths, so that you also know the area associated with that measurement. And in a more recent refinement of the TEF method, the exchange profile is divided into layers at dividing salinity levels using an extrema-finding algorithm. And this figure from Lorenz et al. demonstrates this dividing salinity method. The exchange profiles of inflow and outflow are plotted against salinity. And the extrema of the profiles are used to identify dividing salinities and different layers of that flow. And a final step that I've also used in this analysis is to sum the net exchange over all of the inflow layers and all of the outflow layers in order to calculate a single set of exchange flow parameters, so Q in, S in, Q out, S out, that are averaged over the full year-long simulation for each cross section. And so that's what I focused on for this comparison, for simplicity. And I noted it briefly before, but I also wanna point out that the TEF method can be calculated for tracers other than [audio drops out]. So, in this example by Niv Anidjar, to analyze the water mass exchange through the mouth of the San Diego Bay in winter, in temperature-salinity space, the colors represent the inflow and the outflow. And the plot at the top demonstrates that there are two distinct layers in the salinity exchange. But the plot at the right shows that there are several layers of inflow and outflow across the temperature range. And so for this study, in practice, I calculated the TEF using the full resolution of the model grid in each case. So the normal velocity and salinity were sampled hourly at the center of each grid cell. And the area also varies over time as the sea surface height varies in these models. And the results of the TEF calculated at this full model resolution were assumed to be the correct value. And every other sampling resolution and strategy was compared and evaluated against these results. So at the Columbia River mouth, there was a two-layer exchange flow with oceanic salty water coming in near the bottom, as we saw in the figure earlier, and then fresher water outflowing near the surface; but this time you also see that the fresher water covers a larger range of salinities in the outflow, and the time series are colored to show you the magnitude of those exchanges throughout the year.
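For readers who want the bulk quantities written out, here is a sketch of the TEF formalism in the standard notation of MacCready (2011) and Lorenz et al. (2019); the exact conventions in the analysis described above may differ slightly.

```latex
% Sketch of the TEF bulk quantities (standard notation; conventions may differ
% slightly from the analysis described in the talk).
\begin{align}
  Q(s) &= \Big\langle \iint_{A(s)} u \,\mathrm{d}A \Big\rangle ,
    && A(s)=\text{part of the section with salinity} > s, \\
  q(s) &= -\,\frac{\partial Q}{\partial s}
    && \text{(volume transport per salinity class)}, \\
  Q_{\mathrm{in}} &= \int_{q>0} q \,\mathrm{d}s , \qquad
  Q_{\mathrm{out}} = \int_{q<0} q \,\mathrm{d}s , \\
  s_{\mathrm{in}} &= \frac{1}{Q_{\mathrm{in}}}\int_{q>0} s\,q \,\mathrm{d}s , \qquad
  s_{\mathrm{out}} = \frac{1}{Q_{\mathrm{out}}}\int_{q<0} s\,q \,\mathrm{d}s .
\end{align}
```

Here the angle brackets are the low-pass filter or time average mentioned above, and the dividing-salinity method of Lorenz et al. groups the extrema of Q(s) into inflow and outflow layers before those sums are taken.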
And in the Strait of Juan de Fuca, the exchange is predominantly two-layered, but from this time series in yellow, you can also see evidence of downwelling-favorable wind events, where the fresher water from the Columbia River enters the strait. And at the mouth of the San Diego Bay, sorry, the exchange is also two-layered, but there's a reversal where the outflow at the beginning of the year is fresher than the inflow, and the outflow in the rest of the year is saltier. So outside of the modeling frameworks, resolving the currents, salinity, or temperature over that whole cross section would be a large investment, and it's not realistic at the same scale in application. So the goal here is to explore how well this TEF technique can be applied with minimal discrete samples. So to test this, we sub-sampled the model results using artificial moorings that, first, were evenly distributed across the maximum width of the channel. At each mooring, a number of discrete samples was taken, which were also evenly distributed across the depth of the channel at that mooring. So the area represented by each sample was calculated using the channel bathymetry from the full model resolution. Otherwise, if you just made a rectangle of the same width, there's a bias along the channel edges or in areas with steep gradients in bathymetry if the width is large relative to the slope. And so two approaches for calculating the area were compared: one assuming that the model fields were constant over sigma layers, and another assuming the model fields were constant with depth. But there was no significant difference between those two approaches. And so this really simple geometric strategy of evenly distributed moorings and samples was applied to each model cross section. And the TEF using this distributed sampling strategy was calculated for a suite of values of m and n. And these are the outcomes for the TEF calculations for various m and n. And bear with me, 'cause there's a lot of information on this slide. The horizontal axis for each of these plots is the number of moorings used in each test calculation. And the colors are the number of sampled depths. And the black line is the full model resolution TEF, with gray shading to indicate plus or minus 10% of that value. So in general, as the number of sample depths increased, shown in the darker shades of red, the TEF results converged to the full value. And when only one mooring was sampled in the middle of the channel, the exchange transport tended to be overestimated, likely due to the strongest flows occurring near the center of the channel. And as the number of moorings across the channel increased, the TEF results also converged to the full model [audio is choppy]. So let's just focus on the Strait of Juan de Fuca for one example. So the green numbers here are the minimum number of moorings or sample depths for which the TEF converged to within 10% of the full value. And note, this is a conservative estimate that I used for each of these simulations. So if there was variability around plus or minus 10%, for example, as you switch between an odd number and an even number of moorings, then I took the higher value of [audio drops out]. This was the case in some [audio drops out]. So for the Strait of Juan de Fuca, the TEF results converged for just three or four depths and three or four moorings. And for all six of the cross sections, the results are summarized here.
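As a rough illustration of that geometric sub-sampling, here is a minimal Python sketch for one snapshot of a model cross section. The function name, the nearest-neighbor area assignment, and the salinity bin width are illustrative choices of mine, not details from the talk; a real TEF calculation would also low-pass filter the binned fluxes in time before forming Q in and Q out.

```python
import numpy as np

def geometric_subsample_flux(u, s, cell_area, m, n, s_max=36.0, ds=0.1):
    """Sub-sample one snapshot of a cross section with m evenly spaced moorings
    and n evenly spaced sample depths, then bin the volume flux by salinity
    class (the building block of a TEF calculation).

    u, s, cell_area : (n_depth, n_across) normal velocity, salinity, and cell
                      area from the full model grid (NaN over land).
    Returns salinity bin edges and the volume flux per salinity bin."""
    n_depth, n_across = u.shape
    moorings = np.linspace(0, n_across - 1, m).round().astype(int)  # lateral positions
    depths = np.linspace(0, n_depth - 1, n).round().astype(int)     # sample depths

    # Assign every full-resolution cell the value of its nearest sample, so the
    # cell areas (and hence the bathymetry) stay at full resolution.
    j_near = depths[np.argmin(np.abs(np.arange(n_depth)[:, None] - depths), axis=1)]
    i_near = moorings[np.argmin(np.abs(np.arange(n_across)[:, None] - moorings), axis=1)]
    u_sub = u[j_near][:, i_near]
    s_sub = s[j_near][:, i_near]

    good = (np.isfinite(cell_area) & np.isfinite(u) &
            np.isfinite(u_sub) & np.isfinite(s_sub))
    flux = u_sub[good] * cell_area[good]

    # Bin the volume flux by the sub-sampled salinity class.
    s_edges = np.arange(0.0, s_max + ds, ds)
    q, _ = np.histogram(s_sub[good], bins=s_edges, weights=flux)
    return s_edges, q
```

Summing q over the inflow and outflow layers, after time filtering, would then give the Q in and Q out values whose convergence is compared in the plots described above.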
About four moorings and three sample depths were needed to capture the exchange transport within 10% of the full value. And only two moorings and sample depths were needed to just constrain the salinity. So this suggests that the volume of the exchange is the limiting factor. So I also wanted to see if I could improve this sampling efficiency by using a different strategic approach, since the geometric sampling was very simplistic, and it didn't take into account any information about channel geometry and the variation in these simulations, or any prior knowledge about the flow field. So we used a correlation method similar to the approach suggested in Wei et al. for optimizing mooring placement to constrain air-sea fluxes in the Southern Ocean. And in this method, artificial moorings were located at each of the grid centers across the section, distributed laterally. And the location with the strongest correlation between the sample and the full model time series was selected as the first mooring. And keeping that mooring location fixed, a second mooring was located at each of the remaining grid locations. And the pair with the strongest correlation to the full model TEF time series was selected as the second mooring. And subsequent moorings were selected in a similar way, all using the full vertical resolution. So as the number of moorings in the sub-sample increased, the correlation between the sub-sampled TEF and the full TEF converged to one. And it converged more quickly for this correlation method than just using the evenly divided moorings. However, the number of moorings required for the magnitude of the TEF transport to converge within 10% of the full model was greater, as you can see from the numbers in the table. It was greater than for the simple geometric strategy. And I have a few comments on this. First, the correlation between the full result and even a single mooring sample was high. It was greater than 0.8 in most of the simulations. And this is likely due to the temporal variability in the exchange flow having a broad spatial coherence, so just the tidal signal. So there's similar variability across the whole cross section. And this is as opposed to the Strait of Juan de Fuca, for example, where the temporal variations in the exchange flow have a strong spatial structure, such as that associated with those Columbia River water intrusions. And so this suggests that areas with high variance in the exchange flow don't necessarily correspond with the maximum magnitude of the exchange flow. And a side note about this method was that the correlation was only calculated with one parameter at a time. So this means that the order of moorings sampled to maximize the correlation for Q in was different than the order of moorings sampled to maximize the correlation for Q out, and the same for S in and S out. So we effectively ended up with four different variations of how the TEF varies with increasing number of moorings. And so we can look at the sensitivity of the results to the particular mooring placement. So to explain what we're looking at here, using the top left panel as an example, each color corresponds to the transport Q in flowing into the estuary, but estimated using the mooring distribution order calculated for each of the four different correlation methods. And the circles mark when Q in is within 10% of the full TEF value.
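The greedy selection described here could be sketched roughly as below. This is an illustrative outline, not the actual analysis code: `tef_from_subset` stands in for whatever routine recomputes the TEF time series (for example Q in) from a chosen set of mooring columns at full vertical resolution.

```python
import numpy as np

def greedy_correlation_placement(candidates, tef_from_subset, full_tef, n_select):
    """Greedy mooring selection in the spirit of the talk (after Wei et al.):
    add one mooring at a time, each time keeping the set whose recomputed TEF
    time series correlates best with the full-resolution TEF time series.

    candidates      : iterable of candidate mooring (column) indices
    tef_from_subset : function(list_of_indices) -> 1-D TEF time series
    full_tef        : 1-D TEF time series from the full model resolution
    Returns the ordered list of selected mooring indices."""
    selected, remaining = [], list(candidates)
    for _ in range(n_select):
        best_i, best_r = None, -np.inf
        for i in remaining:
            trial = tef_from_subset(selected + [i])     # recompute with the trial set
            r = np.corrcoef(trial, full_tef)[0, 1]      # correlation with the full TEF
            if r > best_r:
                best_i, best_r = i, r
        selected.append(best_i)                         # lock in the best mooring
        remaining.remove(best_i)
    return selected
```

Because the correlation is computed for one bulk parameter at a time, running this once each for Q in, Q out, S in, and S out would give the four mooring orderings that are compared in the figure.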
So in many cases, the four dots from each of the different correlation methods are closely clustered, and that suggests that the TEF results weren't strongly sensitive to the specific mooring [audio drops out]. So I've deviated a bit into the theoretical modeling world, because this correlation method would only be useful if you had more samples than you were going to use, and it relies entirely on a priori knowledge of the system. So I want to bring us back a little to address real-world implications for monitoring the TEF. So let's imagine a case study where we represent something closer to in situ observations, but still using this modeling framework. So here we're gonna go back to sampling evenly distributed moorings across the channel, but at each mooring the salinity is only sampled at a fixed depth near the surface, as from a surface float, and at a fixed depth off the bottom. But the velocity is still resolved throughout the water column, as if there were a bottom-mounted acoustic Doppler current profiler. And in this case study the structure of the vertical salinity profile between our two samples isn't [audio drops out]. So we came up with two rough estimates for extrapolating the water column salinity at each mooring. So first we used linear extrapolation throughout the water column. And in the second case, we assumed a two-layer profile where the salinity is uniform within each of those two layers. So since one would have velocity profiles in hand during the analysis stage, we used the velocity profiles to estimate the depth of that interface between the two layers in the two-layer case. In practice, velocity and salinity aren't necessarily dynamically coupled, but further details about the salinity variability may also be known from CTD surveys or other prior sampling. And as before, the horizontal axis is the increasing number of moorings for different test calculations. And the black line represents the full TEF results. So in the Strait of Juan de Fuca and the Columbia River, the two top rows, the two-layer salinity approximation does reasonably well at estimating the exchange flow. And the results converge within 10% of the full TEF using approximately three to five moorings. And in the San Diego Bay, both methods of salinity extrapolation converged to the full TEF. And so this study was investigating results from a bay, a salt-wedge estuary, and fjord estuary types. There were different scales and geometric complexity across simulations, but this isn't a comprehensive examination of the whole range of channel shapes, sizes, and dynamics that could be observed. That said, the results match surprisingly well across this variation of models. And we found that the TEF could be well approximated in many cases with less than or equal to four moorings across the channel. And that's even though the simple designs here weren't adapted for specific bathymetry or oceanic conditions. And sampling only the center of the channel did lead to an overestimate of the exchange transport in all of the cases. And often, if you only had two moorings that missed the center of the channel, you could underestimate the transport. So this may be important to keep in mind in interpreting data collected [audio is choppy] limited samples. The TEF can be well approximated in many cases with less than or equal to four sample depths, and possibly even with only two salinity measurements, but it does matter how the salinity is interpolated over the water column.
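A minimal sketch of those two salinity reconstructions might look like the following. The function name, depth convention, and example numbers are hypothetical, and the interface depth would in practice come from where the ADCP velocity profile reverses sign.

```python
import numpy as np

def extrapolate_salinity(z, s_surface, s_bottom, z_interface=None):
    """Rebuild a full salinity profile from only a near-surface and a
    near-bottom sample. z is depth (negative downward, surface near 0).

    z_interface is None : linear profile between the two sensors.
    otherwise           : two-layer profile, uniform within each layer, with
                          the layer interface at z_interface (e.g. where the
                          ADCP velocity changes sign)."""
    z = np.asarray(z, dtype=float)
    if z_interface is None:
        # Linear profile anchored at the deepest and shallowest sample depths
        return np.interp(z, [z.min(), z.max()], [s_bottom, s_surface])
    # Two-layer profile: fresher layer above the interface, saltier layer below
    return np.where(z >= z_interface, s_surface, s_bottom)

# Example: 10 m deep channel, sensors read 20 and 30, velocity reverses at 4 m depth
z = np.linspace(0.0, -10.0, 21)
print(extrapolate_salinity(z, 20.0, 30.0))                    # linear reconstruction
print(extrapolate_salinity(z, 20.0, 30.0, z_interface=-4.0))  # two-layer reconstruction
```

Either reconstructed profile can then be fed into the same salinity-binned flux calculation as the full-resolution fields, which is how the two extrapolation strategies were compared against the full TEF.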
And I found it a bit surprising how similar the number of moorings was across these systems, but encouraging, given the complexity of the flows, that this convergence was so consistent. And also, strategic mooring placement based on the high correlation did not reduce the number of moorings relative to the geometric spacing, because the locations of high temporal variance are not necessarily where a majority of the transport occurs. But of course the value of this knowledge depends on what part of the signal you're trying to capture and what your specific objectives are. And so that's a useful thing to keep in mind for best practices. And looking ahead, I find the results of this analysis really encouraging. As far as recommendations for best practices, there weren't clear improvements from using the theoretical tools that we tested to predict mooring placement strategies, such as using the Kelvin and Ekman numbers or the correlations, but those relied on detailed a priori knowledge of the flow, and they aren't particularly useful in observational design anyway. Instead, these results were largely consistent with the approaches to mooring deployments that have been used throughout the long-term monitoring program at EcoFOCI. And it suggests that the TEF framework can be a useful tool for interpreting observations. And this may open the door for adapting more of the tools that come out of the modeling and theory focused communities to these observations, where we're able to account for the caveats and constrain the error bars associated with this transition. And on that optimistic note, I wanna open the floor for questions and discussion. [Heather Tabisola] Thank you, Emily. We'll do the virtual clap, which I hope one day will actually be back in person. Folks, if you would like to type your questions in the chat, please do so. We can read them out, or just put your name and we'll call on you from the list, and you can turn off mute and ask Emily yourself. Seems like a lot of effort, Emily, to plan where to put a mooring. Any questions for Emily? Everybody's quiet today. My goodness. Al, good morning, Al. Okay, moving over to chat. "Would one possible approach be to calculate EOFs from the model and place moorings based on that?" [Emily Lemagie] Yeah, Al, that's a good question. And that is a prescription that's very similar to the way [audio drops out] where they were using the model and looking at the correlation. So they're looking at correlations within the model to place moorings for observation. And so that is the common approach. And what I found striking here from these results was that when you're monitoring flow in channels, a lot of what is intuitive, like whether the flow is near the edge of the channel or near the center of the channel, can also be a very good guide for capturing most of the exchange flow. And so in the channel flows, I think that requiring a model and some complicated analysis, relying on a lot of a priori knowledge or the investment of developing a model, isn't necessary to kind of apply this knowledge of: is the channel wide, does it have really strong currents. You can use kind of these basic characteristics to still do pretty well. [Heather Tabisola] Thank you, Emily. Ned has a question as well. "What if you only have funds for two moorings at two depths?" I like the real life application, Ned. "Can TEF help with that?" [Emily Lemagie] Well, I think that that's... The point of the case study was to look at that.
If that's all you can do, then, I guess, it depends on the depths. So if you have one ADCP at the bottom, you can resolve the [audio drops out]. And so it's whether or not you're able to sample salinity or temperature at multiple depths in order to look at the characteristics of the water that's being transported. And in many of these channels, if you spaced out two moorings and missed the center of the channel, you came up with an underestimate, but you can use that information then to apply, perhaps, a correction or estimate how much that underestimate would be, [audio drops out] similar systems. [Heather Tabisola] Any follow up question with that, Ned, or is that good? Just wanna make sure. I'm gonna assume good, but I'll come back. Jim also has a question. "Is the Bering Strait or even the Aleutian passes in geostrophic adjustment?" [Emily Lemagie] Yeah, that's a good question. So in the Bering Strait, for example, you can see, I guess I need to go back to this slide, but there's a spatial gradient in the flow, and the specific water properties are on one side of the channel. And so when you have a wide channel where the Coriolis is important, you have that strong horizontal shear. And so that's... When you're monitoring one of those flows, you wanna make sure that your mooring is situated so that you're able to capture the important water properties and where the flow is strongest. And so it depends. I mean, there's variability across those Aleutian passes in how wide they are, how deep they are, how you would strategize. [Heather Tabisola] Plus ship traffic and all those fun things too, right. Chris has a question. Morning, Chris, nice to see you here. "If you had a single instrument that could give you about one to two meter resolution profiles of temperature, salinity and velocity, would that enable new science? Any value in real time observations?" [Emily Lemagie] Yeah, that's a great question. I think one of the biggest challenges for these passes with strong currents, that are deep, and particularly with ship traffic, is the surface measurements. So a lot of our historical measurements, where we have an ADCP at the bottom, you're also able to put temperature and salinity measurements at the bottom, but you don't capture the gradient in temperature and salinity with depth. So that case study, which even had two salinity sensors at each mooring, we don't always have a surface sensor. And so if you could get T and S, even at the surface, let alone throughout the water column, then there's a lot of value in understanding which water properties are being exchanged. So knowing the transport, you can sometimes extrapolate that, but if there's strong stratification, then to know what the actual heat flux or freshwater flux is still also needs those surface measurements. [Heather Tabisola] And Ryan, I didn't mean to skip you. You popped up right at the same time as Chris. So Ryan says "not to get too far into the weeds, but if we miss the top and bottom Ekman layers with ADCPs, any idea how bad that might get regarding exchange estimates?" [Emily Lemagie] Yeah, that's a great question. I think that you hit the nail on the head with another one of the big concerns with our ability to monitor these flows: in shallow water conditions in particular, you often can miss the top and bottom Ekman layers.
And that was part of another study I looked at on the Oregon shelf, comparing different extrapolations when you do miss the surface velocity. I found that some of our assumptions about the across-shelf transport, and some of the ecological hypotheses that were based on those assumptions about upwelling and downwelling, could be misguided; the surface Ekman layer could go in the opposite direction from the wind-driven Ekman layer that was predicted. And so there are conditions where that can be very important. So measuring the surface salinity and temperature, as well as measuring the surface layer currents and the bottom layer currents, is a critical step. And it's probably the most challenging. [Heather Tabisola] And then Wade just made a note to Chris. High vertical resolution profiles would help modelers develop mixing parameterization schemes. And that's also limited by ice, location, all those wonderful things at the surface, too, being able to get that. Alright, Ned has another question or comment. "You might be able to get T and S at the surface from satellite measurements. Cloudiness is always an issue, and combine that with a bottom mounted ADCP, and then you could have sort of a virtual hybrid mooring." I guess that's a comment. [Emily Lemagie] Yeah, so that's a great idea. I think one of the challenges with that, when you're looking at small-scale or estuarine exchange, is the tidal variability in those water properties. The TEF method, for example, captures that tidal dependence, and those properties can change on a higher frequency than what is averaged by the satellite. [Heather Tabisola] And so Emily, I wonder too, about the Columbia River where you were working, where this kind of study occurred, versus in Alaska where we often use drifters or other tools to get at some of those, right. It's not always that one size fits all. [Emily Lemagie] Right. [Heather Tabisola] As Ned said, everything's sort of a hybrid. Nothing's really a one-off in what we do. [Emily Lemagie] Right, yeah. And so I think one takeaway, I ended on kind of an optimistic note, but this measuring the surface and bottom Ekman layers and measuring surface water properties, getting that at a resolution, if tidal variability is important, those are some of the key needs moving forward. But not having that information in all of the cases with historical modeling or monitoring, we also have some tools to interpret and think about that data. [Heather Tabisola] And is there any, do you think, room in here? Do you think this is more purely physics and understanding, or is there space for Indigenous knowledge and historical references to include in these sorts of models and planning? [Emily Lemagie] Yeah, well, when you're monitoring a new area where there's more traditional knowledge than there are scientific measurements from NOAA, then that knowledge is really important for characterizing where that flow is. Often you can get information about what side of the channel the inflow is or the outflow is. And that information, the local knowledge in these smaller scale local estuaries where there is a lot of ship traffic and people are familiar with those waters, that information is really crucial. [Heather Tabisola] All right, it is 10:51. Are there any other questions or last thoughts, maybe, Emily, that you wanna leave folks with, [audio is choppy] that opportunity?
[Emily Lemagie] Yeah, I'm happy to take any more questions if there are some, but my final thought is I think that this was a really great discussion. I think that you guys have done a good job of identifying what the priorities and needs are for future monitoring direction and development. And yeah, I think that these kinds of tools, combined with new technologies and approaches to be creative about answering these questions, just have so many opportunities for improving our knowledge and interpretation of these systems [audio is choppy]. [Heather Tabisola] Awesome. Thank you, Emily. It's really nice to have you and hear you speak today. And if there aren't any other questions, we will wrap this up. And thank you everybody for joining. Deana, remind me. We do have another presentation next week, right, or do we have an off week next week? [Deana Crouser] I believe we have an off week. Let's see. [Heather Tabisola] Wait, it might be Peggy's talk next week. Okay, she's-- [Deana Crouser] Oh, no, we have, I think it's Laurel. [Heather Tabisola] Oh, Laurel's next week. Okay, good. Woo. Yes. Great, we're back here next week with Laurel. And yeah. And seminars are recorded. Won't be posted for a bit. So we'll keep you updated on that as well. Thank you everybody for joining us today. [Audience] Thanks Emily.