A new discipline in the era of SGE and E-E-A-T

With the rise of large language models (LLMs), mass-produced AI content is proliferating, and with it the risk of incorrect information spreading. It is therefore increasingly important for search engines and answer engines to identify trustworthy, authoritative sources and weed out the rest. This recent evolution in SEO requires new tasks and skills…

Read More

Can Small Language Models Give High Performance? Meet StableLM: An Open-Source Language Model That Can Generate Text and Code, Delivering High Performance With Proper Training

Stability AI, the artificial-intelligence startup best known for its Stable Diffusion image-generation technology, has introduced a new free and open-source language model called StableLM. For the Alpha phase, the model is available in two parameter sizes, three billion and seven billion, with fifteen-billion and sixty-five-billion parameter versions to follow….

Read More

EleutherAI Research Group Demonstrates How Classifier-Free Guidance (CFG) Can Be Used With LLMs

Recently, large language models have shown impressive generative skills, allowing them to handle a wide variety of problems. Typically, generation is conditioned via “prompting,” either with task instructions and context or with a small number of examples. However, problems including hallucination, degradation, and topic drift have been observed in language generation, especially with smaller models….
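The core CFG idea carries over from diffusion models to text generation: at each step, interpolate between the logits conditioned on the prompt and the logits from an unconditional (prompt-free) context, pushing sampling toward tokens the prompt makes more likely. A minimal NumPy sketch of that interpolation (an illustrative assumption, not EleutherAI's actual implementation; `cfg_logits` and the toy vocabulary are made up for the example):

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, gamma):
    """Classifier-free guidance on next-token logits.

    gamma = 1 recovers ordinary conditional sampling;
    gamma > 1 amplifies the difference the prompt makes,
    sharpening the distribution toward prompt-favored tokens.
    """
    return uncond_logits + gamma * (cond_logits - uncond_logits)

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Toy vocabulary of 4 tokens: the prompt raises token 2's logit.
uncond = np.array([1.0, 1.0, 1.0, 1.0])
cond = np.array([1.0, 1.0, 3.0, 1.0])

p_plain = softmax(cfg_logits(cond, uncond, gamma=1.0))
p_guided = softmax(cfg_logits(cond, uncond, gamma=2.0))
# With gamma=2 the prompt-favored token gets more probability
# mass than under plain conditional sampling.
```

In practice this doubles the cost per decoding step (one conditional and one unconditional forward pass), the same trade-off CFG incurs in diffusion models.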

Read More

July 2023 Google Webmaster Report

If I had to sum up this month, I’d call it the month of way too many unconfirmed Google search ranking updates. I wrote about five of them, but it felt like there were even more. Google pushed a quality update for its Search Generative Experience. Google dropped a ton of SEO topics and finally…

Read More

Microsoft Researchers Propose a Novel Framework for LLM Calibration Using Pareto Optimal Self-Supervision without Using Labeled Training Data

Recent developments have brought a remarkable increase in the capability of large language models (LLMs), with generative pretrained transformer (GPT) models showing significant promise. The transition from GPT-3 to GPT-4, along with the emergence of other LLMs such as PaLM and LLaMA, demonstrated considerable improvement in problem-solving and natural-language understanding. Additionally, generative…

Read More