The wordEmbeddingLayer class is part of Text Analytics Toolbox and was introduced in release R2018b.
Perhaps that blog post should have mentioned that both Text Analytics Toolbox and Deep Learning Toolbox were required to run the examples it contains. IMO, one of the purposes of some of the posts on the MathWorks blogs, and of some of the examples included in the documentation, is to show how two or more of our products can work together to solve a problem that doesn't fall squarely into either product's area of focus.
There are several examples in the R2018b DLT Help that use it, but those examples do say that Text Analytics Toolbox (TAT) has also been used (which is a little bit uncool, IMO :-)).
So should every layer suitable for use with Deep Learning Toolbox ship as part of that toolbox? If the wordEmbeddingLayer class uses functionality from Text Analytics Toolbox that is not directly related to the network architecture, should those functions also be included in Deep Learning Toolbox instead of or in addition to Text Analytics Toolbox? How about (looking at the list from release R2018b) the dozen or so layers that are part of Computer Vision System Toolbox? How much of that toolbox should be shipped as part of Deep Learning Toolbox in addition to being shipped as part of Computer Vision System Toolbox?
If you look at the list in the most recent release, even more products ship layers that are compatible with the infrastructure in Deep Learning Toolbox: Lidar Toolbox, Computer Vision Toolbox, Text Analytics Toolbox, Reinforcement Learning Toolbox, Signal Processing Toolbox, Wavelet Toolbox, and Image Processing Toolbox.
The "List of Deep Learning Layers" page in the R2019 DLT Help says "wordEmbeddingLayer (Text Analytics Toolbox™)".
(so why is it listed in the DLT at all?)
Because it is a layer that can be used with the infrastructure that is part of Deep Learning Toolbox to create deep learning models.
Finally, if wordEmbeddingLayer really is available only in the TAT, are there any workarounds or replacements that would work in the R2018b DLT, without the need for the TAT?
It is possible to define custom deep learning layers by subclassing nnet.layer.Layer. I'm not sure how involved it would be to create a layer equivalent to wordEmbeddingLayer.
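As a rough illustration only (a hypothetical sketch, not the actual wordEmbeddingLayer implementation; the class name simpleEmbeddingLayer and its details are invented here), the core idea of a word embedding — a learnable lookup table mapping integer word indices to dense vectors — could be expressed as a custom layer. Note that in R2018b a custom layer must also supply a backward method, and the real wordEmbeddingLayer additionally handles tokenized sequence input, out-of-vocabulary tokens, and so on, none of which this sketch attempts:

```matlab
classdef simpleEmbeddingLayer < nnet.layer.Layer
    % simpleEmbeddingLayer  Hypothetical sketch of an embedding layer.
    % Maps a vector of integer word indices (1..vocabularySize) to
    % columns of a learnable weight matrix.

    properties (Learnable)
        Weights  % embeddingDimension-by-vocabularySize matrix
    end

    methods
        function layer = simpleEmbeddingLayer(embeddingDimension, vocabularySize, name)
            layer.Name = name;
            layer.Description = "Custom embedding layer (sketch)";
            % Small random initialization of the embedding matrix
            layer.Weights = 0.01 * randn(embeddingDimension, vocabularySize);
        end

        function Z = predict(layer, X)
            % X holds word indices; look up the corresponding columns.
            % (Assumes X is a vector of indices; real sequence input
            % has additional dimensions to handle.)
            Z = layer.Weights(:, X);
        end

        function [dLdX, dLdW] = backward(layer, X, ~, dLdZ, ~)
            % Indices are not differentiable, so dLdX is zero.
            dLdX = zeros(size(X), 'like', dLdZ);
            % Scatter the output gradients back onto the looked-up columns.
            dLdW = zeros(size(layer.Weights), 'like', layer.Weights);
            for i = 1:numel(X)
                dLdW(:, X(i)) = dLdW(:, X(i)) + dLdZ(:, i);
            end
        end
    end
end
```

Whether this would be a practical replacement depends on how much of the surrounding TAT functionality (tokenization, vocabulary handling) the rest of your workflow needs.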