Today, Google released an update to its tool for helping tenure-starved researchers and cunning scientists gauge the relative “impact” of scholarly, peer-reviewed publications, and cite the shit out of them. The tool is called Scholar Metrics, and it uses some Google-y algorithm that takes into account clicks and a few other proxies for readership, then spits out a ranking. There’s a bit more nuance to it than that, but unlike competing tools from Thomson Reuters, Google appears to have crafted Scholar Metrics with the primary goal of helping researchers find landing spots for their articles, with only a fleeting concern, at least branding-wise, for prestige.
One drawback of Scholar Metrics is that the system only dates back to 2012, and included publications have to conform to particular standards for Google’s robots to sniff out the requisite data.
I gave it a test-spin today, and it’s solid, but hampered by some of the same dense computer-science jargon that makes the storied Impact Factor a shitty, convoluted measure. Here’s hoping Google keeps supporting it and makes it more grounded.
You can read more here.
(Photo courtesy of The National Eye Institute)