Currently the robots.txt file is useless because it's interpreted as one path: `/icons/ /fonts/ *.js *.css`. An example path that would match this pattern (and therefore be disallowed for robots) is `https://cobalt.tools/icons/ /fonts/ bla.js .css`, which is obviously nonsense.
```
User-Agent: *
Disallow: /icons/
Disallow: /fonts/
Disallow: /*.js
Disallow: /*.css
```
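To make the failure concrete, here is a minimal sketch using Python's stdlib `urllib.robotparser`, comparing the joined single-line `Disallow` (as described above) against one rule per line. The single-line "broken" content and the example URL are assumptions for illustration; note that `robotparser` matches `Disallow` values as literal URL prefixes and does not implement the `*` wildcard, so only the directory rule is exercised here.

```python
from urllib.robotparser import RobotFileParser

# Assumed pre-fix content: all four patterns joined into one Disallow line,
# which parsers treat as a single literal path prefix.
broken = [
    "User-Agent: *",
    "Disallow: /icons/ /fonts/ *.js *.css",
]

# One rule per line, as in the file above.
fixed = [
    "User-Agent: *",
    "Disallow: /icons/",
    "Disallow: /fonts/",
    "Disallow: /*.js",
    "Disallow: /*.css",
]

url = "https://cobalt.tools/icons/favicon.png"  # hypothetical example URL

rp = RobotFileParser()
rp.parse(broken)
# True: the joined rule is one prefix that no real URL starts with,
# so /icons/... is NOT blocked -- the file does nothing.
print(rp.can_fetch("*", url))

rp = RobotFileParser()
rp.parse(fixed)
# False: "Disallow: /icons/" now matches as a plain prefix and blocks the URL.
print(rp.can_fetch("*", url))
```

The prefix-matching behavior is exactly why the one-line version is dead weight: crawlers don't split a `Disallow` value on spaces, they compare it against the URL path as a whole.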